-----============= acceptance-small: sanity ============----- Thu Apr 18 20:16:16 EDT 2024
excepting tests: 225 255 256 400a 42a 42c 42b 118c 118d 407 411b 130b 130c 130d 130e 130f 130g 312
skipping tests SLOW=no: 27m 60i 64b 68 71 135 136 230d 300o 842 51b
=== sanity: start setup 20:16:20 (1713485780) ===
oleg216-client.virtnet: executing check_config_client /mnt/lustre
oleg216-client.virtnet: Checking config lustre mounted on /mnt/lustre
Checking servers environments
Checking clients oleg216-client.virtnet environments
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b6db0000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b6db0000.idle_timeout=debug
disable quota as required
oleg216-server: oleg216-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
=== sanity: finish setup 20:16:26 (1713485786) ===
running as uid/gid/euid/egid 500/500/500/500, groups: [true]
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d0_runas_test/f6927]
preparing for tests involving mounts
mke2fs 1.46.2.wc5 (26-Mar-2022)
debug=all
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60a: llog_test run from kernel module and test llog_reader ========================================================== 20:16:29 (1713485789)
SKIP: sanity test_60a missing subtest run-llog.sh
SKIP 60a (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60b: limit repeated messages from CERROR/CWARN ========================================================== 20:16:32 (1713485792)
PASS 60b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60c: unlink file when mds full ============ 20:16:35 (1713485795)
create 5000 files
 - open/close 3040 (time 1713485806.82 total 10.00 last 303.94)
total: 5000 open/close in 17.41 seconds: 287.12 ops/second
fail_loc=0x80000137
 - unlinked 0 (time 1713485815 ; total 0 ; last 0)
total: 5000 unlinks in 12 seconds: 416.666656 unlinks/second
fail_loc=0
PASS 60c (34s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60d: test printk console message masking == 20:17:10 (1713485830)
printk=0
emerg
PASS 60d (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60e: no space while new llog is being created ========================================================== 20:17:13 (1713485833)
fail_loc=0x15b
PASS 60e (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60f: change debug_path works ============== 20:17:17 (1713485837)
debug_path=/tmp/f60f.sanity
fail_loc=0x8000050e
ls: cannot access /tmp/f60f.sanity*: No such file or directory
0 /tmp/f60f.sanity.1713485837.12985
debug_path=/tmp/lustre-log
PASS 60f (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60g: transaction abort won't cause MDT hung ========================================================== 20:17:20 (1713485840)
/home/green/git/lustre-release/lustre/tests/test-framework.sh: line 4729: 13903 Killed  ( local index=0; while true; do $LFS setdirstripe -i $(($index % $MDSCOUNT)) -c $MDSCOUNT $DIR/$tdir/subdir$index 2> /dev/null; mkdir $DIR/$tdir/subdir$index 2> /dev/null; rmdir $DIR/$tdir/subdir$index 2> /dev/null; index=$((index + 1)); done )
Started LFSCK on the device lustre-MDT0000: scrub namespace
/mnt/lustre/d60g.sanity:
subdir104 subdir113 subdir141 subdir150 subdir169 subdir190 subdir283 subdir326 subdir347 subdir369
subdir380 subdir391 subdir412 subdir43 subdir432 subdir442 subdir473 subdir52 subdir526 subdir577
subdir605 subdir61 subdir624 subdir643 subdir652 subdir680 subdir690 subdir7 subdir70 subdir700
subdir748 subdir757 subdir766 subdir785 subdir79 subdir794 subdir813 subdir87 subdir877 subdir886
subdir905 subdir914 subdir924 subdir943 subdir962 subdir981 subdir982
/mnt/lustre/d60g.sanity/subdir104:
/mnt/lustre/d60g.sanity/subdir113:
/mnt/lustre/d60g.sanity/subdir141:
/mnt/lustre/d60g.sanity/subdir150:
/mnt/lustre/d60g.sanity/subdir169:
/mnt/lustre/d60g.sanity/subdir190:
/mnt/lustre/d60g.sanity/subdir283:
/mnt/lustre/d60g.sanity/subdir326:
/mnt/lustre/d60g.sanity/subdir347:
/mnt/lustre/d60g.sanity/subdir369:
/mnt/lustre/d60g.sanity/subdir380:
/mnt/lustre/d60g.sanity/subdir391:
/mnt/lustre/d60g.sanity/subdir412:
/mnt/lustre/d60g.sanity/subdir43:
/mnt/lustre/d60g.sanity/subdir432:
/mnt/lustre/d60g.sanity/subdir442:
/mnt/lustre/d60g.sanity/subdir473:
/mnt/lustre/d60g.sanity/subdir52:
/mnt/lustre/d60g.sanity/subdir526:
/mnt/lustre/d60g.sanity/subdir577:
/mnt/lustre/d60g.sanity/subdir605:
/mnt/lustre/d60g.sanity/subdir61:
/mnt/lustre/d60g.sanity/subdir624:
/mnt/lustre/d60g.sanity/subdir643:
/mnt/lustre/d60g.sanity/subdir652:
/mnt/lustre/d60g.sanity/subdir680:
/mnt/lustre/d60g.sanity/subdir690:
/mnt/lustre/d60g.sanity/subdir7:
/mnt/lustre/d60g.sanity/subdir70:
/mnt/lustre/d60g.sanity/subdir700:
/mnt/lustre/d60g.sanity/subdir748:
/mnt/lustre/d60g.sanity/subdir757:
/mnt/lustre/d60g.sanity/subdir766:
/mnt/lustre/d60g.sanity/subdir785:
/mnt/lustre/d60g.sanity/subdir79:
/mnt/lustre/d60g.sanity/subdir794:
/mnt/lustre/d60g.sanity/subdir813:
/mnt/lustre/d60g.sanity/subdir87:
/mnt/lustre/d60g.sanity/subdir877:
/mnt/lustre/d60g.sanity/subdir886:
/mnt/lustre/d60g.sanity/subdir905:
/mnt/lustre/d60g.sanity/subdir914:
/mnt/lustre/d60g.sanity/subdir924:
/mnt/lustre/d60g.sanity/subdir943:
/mnt/lustre/d60g.sanity/subdir962:
/mnt/lustre/d60g.sanity/subdir981:
/mnt/lustre/d60g.sanity/subdir982:
PASS 60g (28s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60h: striped directory with missing stripes can be accessed ========================================================== 20:17:50 (1713485870)
SKIP: sanity test_60h Need at least 2 MDTs
SKIP 60h (0s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity test_60i skipping SLOW test 60i
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60j: llog_reader reports corruptions ====== 20:17:53 (1713485873)
SKIP: sanity test_60j ldiskfs only test
SKIP 60j (0s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 61a: mmap() writes don't make sync hang ========================================================================== 20:17:55 (1713485875)
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0043223 s, 948 kB/s
PASS 61a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 61b: mmap() of unstriped file is successful ========================================================== 20:17:59 (1713485879)
PASS 61b (1s)
debug_raw_pointers=0
debug_raw_pointers=0
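The failures exercised in the 60-series above are driven by Lustre's fail_loc fault-injection hooks (fail_loc=0x80000137, 0x15b, 0x8000050e, ...). A minimal sketch of the pattern, assuming a node with lctl available; the high 0x80000000 bit makes a fail point fire only once:

    lctl set_param fail_loc=0x80000137   # arm the fail point (0x80000000 bit = one-shot)
    lctl set_param fail_val=3            # optional argument some fail points consume
    # ... run the workload that reaches the instrumented code path ...
    lctl set_param fail_loc=0 fail_val=0 # disarm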
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 63a: Verify oig_wait interruption does not crash ================================================================= 20:18:02 (1713485882)
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9616: 21042 Terminated  dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9616: 21055 Terminated  dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9616: 21062 Terminated  dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9616: 21070 Terminated  dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9616: 21078 Terminated  dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9616: 21086 Terminated  dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9616: 21094 Terminated  dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9616: 21101 Terminated  dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9616: 21109 Terminated  dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9616: 21117 Terminated  dd if=/dev/zero of=$DIR/f63 bs=8k
checking grant......UUID          1K-blocks   Used      Available  Use%  Mounted on
lustre-MDT0000_UUID               2210688     3840      2204800    1%    /mnt/lustre[MDT:0]
lustre-OST0000_UUID               3771392     3072      3766272    1%    /mnt/lustre[OST:0]
lustre-OST0001_UUID               3771392     50176     3717120    2%    /mnt/lustre[OST:1]
filesystem_summary:               7542784     53248     7483392    1%    /mnt/lustre
pass grant check: client:499122176 server:499122176
PASS 63a (62s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 63b: async write errors should be returned to fsync ============================================================= 20:19:06 (1713485946)
debug=-1
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00562701 s, 728 kB/s
fail_loc=0x80000406
fsync: Input/output error
debug=trace inode super iotrace malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec lfsck hsm snapshot layout
debug=super ioctl neterror warning dlmtrace error emerg ha rpctrace vfstrace config console lfsck
checking grant......UUID          1K-blocks   Used      Available  Use%  Mounted on
lustre-MDT0000_UUID               2210688     3840      2204800    1%    /mnt/lustre[MDT:0]
lustre-OST0000_UUID               3771392     3072      3766272    1%    /mnt/lustre[OST:0]
lustre-OST0001_UUID               3771392     3072      3764224    1%    /mnt/lustre[OST:1]
filesystem_summary:               7542784     6144      7530496    1%    /mnt/lustre
pass grant check: client:499122176 server:499122176
PASS 63b (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 64a: verify filter grant calculations (in kernel) =============================================================== 20:19:12 (1713485952)
UUID                              1K-blocks   Used      Available  Use%  Mounted on
lustre-MDT0000_UUID               2210688     3968      2204672    1%    /mnt/lustre[MDT:0]
lustre-OST0000_UUID               3771392     3072      3766272    1%    /mnt/lustre[OST:0]
lustre-OST0001_UUID               3771392     3072      3764224    1%    /mnt/lustre[OST:1]
filesystem_summary:               7542784     6144      7530496    1%    /mnt/lustre
osc.lustre-OST0000-osc-ffff8800b6db0000.cur_lost_grant_bytes=1703936
osc.lustre-OST0001-osc-ffff8800b6db0000.cur_lost_grant_bytes=3407872
osc.lustre-OST0000-osc-ffff8800b6db0000.cur_grant_bytes=5111808
osc.lustre-OST0001-osc-ffff8800b6db0000.cur_grant_bytes=488898560
checking grant......UUID          1K-blocks   Used      Available  Use%  Mounted on
lustre-MDT0000_UUID               2210688     3968      2204672    1%    /mnt/lustre[MDT:0]
lustre-OST0000_UUID               3771392     3072      3766272    1%    /mnt/lustre[OST:0]
lustre-OST0001_UUID               3771392     3072      3764224    1%    /mnt/lustre[OST:1]
filesystem_summary:               7542784     6144      7530496    1%    /mnt/lustre
pass grant check: client:499122176 server:499122176
PASS 64a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity test_64b skipping SLOW test 64b
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 64c: verify grant shrink ================== 20:19:16 (1713485956)
osc.lustre-OST0000-osc-ffff8800b6db0000.cur_grant_bytes=0
checking grant......UUID          1K-blocks   Used      Available  Use%  Mounted on
lustre-MDT0000_UUID               2210688     3968      2204672    1%    /mnt/lustre[MDT:0]
lustre-OST0000_UUID               3771392     3072      3766272    1%    /mnt/lustre[OST:0]
lustre-OST0001_UUID               3771392     3072      3764224    1%    /mnt/lustre[OST:1]
filesystem_summary:               7542784     6144      7530496    1%    /mnt/lustre
pass grant check: client:497418240 server:497418240
PASS 64c (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 64d: check grant limit exceed ============= 20:19:20 (1713485960)
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 5.69158 s, 184 MB/s
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9745: kill: (24647) - No such process
checking grant......UUID          1K-blocks   Used      Available  Use%  Mounted on
lustre-MDT0000_UUID               2210688     3968      2204672    1%    /mnt/lustre[MDT:0]
lustre-OST0000_UUID               3771392     1027072   2742272    28%   /mnt/lustre[OST:0]
lustre-OST0001_UUID               3771392     3072      3764224    1%    /mnt/lustre[OST:1]
filesystem_summary:               7542784     1030144   6506496    14%   /mnt/lustre
pass grant check: client:983433216 server:983433216
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 64d (23s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 64e: check grant consumption (no grant allocation) ========================================================== 20:19:45 (1713485985)
debug=+cache
Stopping client oleg216-client.virtnet /mnt/lustre (opts:)
Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre
fail_loc=0x725
1+0 records in
1+0 records out
360448 bytes (360 kB) copied, 0.0328299 s, 11.0 MB/s
fail_loc=0
Stopping client oleg216-client.virtnet /mnt/lustre (opts:)
Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre
fail_loc=0x725
fail_loc=0
PASS 64e (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 64f: check grant consumption (with grant allocation) ========================================================== 20:19:50 (1713485990)
debug=+cache
Stopping client oleg216-client.virtnet /mnt/lustre (opts:)
Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre
1+0 records in
1+0 records out
593920 bytes (594 kB) copied, 0.0400371 s, 14.8 MB/s
Stopping client oleg216-client.virtnet /mnt/lustre (opts:)
Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre
fail_loc=0x50a
fail_val=3
1+0 records in
1+0 records out
593920 bytes (594 kB) copied, 0.0190301 s, 31.2 MB/s
fail_loc=0
fail_val=0
PASS 64f (2s)
debug_raw_pointers=0
debug_raw_pointers=0
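The grant values verified by tests 63-64 are exposed as client-side OSC parameters; a sketch of inspecting them by hand, assuming a mounted client (parameter names as printed in the log):

    lctl get_param osc.*.cur_grant_bytes        # space each OST has granted this client
    lctl get_param osc.*.cur_lost_grant_bytes   # grant lost across reconnects
    lctl set_param osc.*.grant_shrink=1         # allow an idle client to hand grant back
    lctl set_param osc.*.grant_shrink_interval=10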
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 64g: grant shrink on MDT ================== 20:19:54 (1713485994)
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.00594726 s, 22.0 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.00578018 s, 22.7 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.00559669 s, 23.4 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.00606885 s, 21.6 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.0131889 s, 9.9 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.00657969 s, 19.9 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.00553361 s, 23.7 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.00621463 s, 21.1 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.00909541 s, 14.4 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.00716603 s, 18.3 MB/s
0 grants, 0 pages
mdc.lustre-MDT0000-mdc-ffff88012b4ca800.grant_shrink_interval=1200
PASS 64g (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 64h: grant shrink on read ================= 20:19:58 (1713485998)
osc.lustre-OST0000-osc-ffff88012b4ca800.grant_shrink=1
osc.lustre-OST0000-osc-ffff88012b4ca800.grant_shrink_interval=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.838196 s, 12.5 MB/s
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00655789 s, 625 kB/s
PASS 64h (10s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 64i: shrink on reconnect ================== 20:20:10 (1713486010)
64+0 records in
64+0 records out
67108864 bytes (67 MB) copied, 1.77805 s, 37.7 MB/s
fail_loc=0x80000513
fail_val=17
osc.lustre-OST0000-osc-ffff88012b4ca800.cur_grant_bytes=65601536B
Failing ost1 on oleg216-server
Stopping /mnt/lustre-ost1 (opts:) on oleg216-server
20:20:13 (1713486013) shut down
Failover ost1 to oleg216-server
mount facets: ost1
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg216-server: oleg216-server.virtnet: executing set_default_debug all all
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
Started lustre-OST0000
20:20:26 (1713486026) targets are mounted
20:20:26 (1713486026) facet_failover done
oleg216-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.227747 s, 36.8 MB/s
PASS 64i (21s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65a: directory with no stripe info ======== 20:20:33 (1713486033)
default stripe 1, ost count 2
PASS 65a (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65b: directory setstripe -S stripe_size*2 -i 0 -c 1 ========================================================== 20:20:36 (1713486036)
dir stripe 1, default stripe 1, ost count 2
PASS 65b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65c: directory setstripe -S stripe_size*4 -i 1 -c 1 ========================================================== 20:20:39 (1713486039)
dir stripe 1, default stripe 1, ost count 2
PASS 65c (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65d: directory setstripe -S stripe_size -c stripe_count ========================================================== 20:20:42 (1713486042)
dir stripe 0, default stripe 1, ost count 2
PASS 65d (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65e: directory setstripe defaults ========= 20:20:46 (1713486046)
(Default) /mnt/lustre/d65e.sanity
default stripe 1, ost count 2
PASS 65e (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65f: dir setstripe permission (should return error) ============================================================= 20:20:49 (1713486049)
running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [/mnt/lustre/d65f.sanityf]
lfs setstripe: setstripe error for '/mnt/lustre/d65f.sanityf': Operation not permitted
PASS 65f (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65g: directory setstripe -d =============== 20:20:52 (1713486052)
(Default) /mnt/lustre/d65g.sanity
PASS 65g (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65h: directory stripe info inherit ============================================================================== 20:20:56 (1713486056)
PASS 65h (2s)
debug_raw_pointers=0
debug_raw_pointers=0
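Tests 65a-65h manipulate directory default layouts with lfs setstripe; a minimal sketch of the commands involved (the path is illustrative):

    lfs setstripe -S 8M -i 0 -c 1 /mnt/lustre/dir  # 8 MiB stripes, first object on OST index 0, one stripe
    lfs getstripe -d /mnt/lustre/dir               # print only the directory's default layout
    lfs setstripe -d /mnt/lustre/dir               # delete the default layout again (what 65g verifies)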
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65i: various tests to set root directory striping ========================================================== 20:20:59 (1713486059)
/mnt/lustre
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/f61b.sanity has no stripe info
/mnt/lustre/d65f.sanityf
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/d65b.sanity
stripe_count: 1 stripe_size: 8388608 pattern: raid0 stripe_offset: 0
/mnt/lustre/f64f.sanity
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 0
        obdidx   objid   objid        group
             0    2510   0x9ce  0x240000400
/mnt/lustre/d60f.sanity
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/f60b.sanity
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 1
        obdidx   objid   objid        group
             1       2     0x2  0x280000400
/mnt/lustre/d65d.sanity
stripe_count: 1 stripe_size: 4194304 pattern: raid0 stripe_offset: -1
/mnt/lustre/d65g.sanity
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/f61
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 0
        obdidx   objid   objid        group
             0    2504   0x9c8  0x240000400
/mnt/lustre/d65h.sanity
stripe_count: 1 stripe_size: 8388608 pattern: raid0 stripe_offset: 0
/mnt/lustre/d65e.sanity
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/f63b.sanity
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 1
        obdidx   objid   objid        group
             1    2505   0x9c9  0x280000400
/mnt/lustre/d65c.sanity
stripe_count: 1 stripe_size: 16777216 pattern: raid0 stripe_offset: 1
/mnt/lustre/d65a.sanity
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre
lmm_fid: [0x200000007:0x1:0x0]
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/f61b.sanity has no stripe info
/mnt/lustre/d65f.sanityf
lmm_fid: [0x200000406:0x27:0x0]
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/d65b.sanity
lmm_fid: [0x200000406:0x1e:0x0]
stripe_count: 1 stripe_size: 8388608 pattern: raid0 stripe_offset: 0
/mnt/lustre/f64f.sanity
lmm_magic:         0x0BD10BD0
lmm_seq:           0x200000406
lmm_object_id:     0x1
lmm_fid:           [0x200000406:0x1:0x0]
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 0
        obdidx   objid   objid        group
             0    2510   0x9ce  0x240000400
/mnt/lustre/d60f.sanity
lmm_fid: [0x200000401:0x138d:0x0]
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/f60b.sanity
lmm_magic:         0x0BD10BD0
lmm_seq:           0x200000401
lmm_object_id:     0x3
lmm_fid:           [0x200000401:0x3:0x0]
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 1
        obdidx   objid   objid        group
             1       2     0x2  0x280000400
/mnt/lustre/d65d.sanity
lmm_fid: [0x200000406:0x22:0x0]
stripe_count: 1 stripe_size: 4194304 pattern: raid0 stripe_offset: -1
/mnt/lustre/d65g.sanity
lmm_fid: [0x200000406:0x28:0x0]
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/f61
lmm_magic:         0x0BD10BD0
lmm_seq:           0x200000401
lmm_object_id:     0x1b3e
lmm_fid:           [0x200000401:0x1b3e:0x0]
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 0
        obdidx   objid   objid        group
             0    2504   0x9c8  0x240000400
/mnt/lustre/d65h.sanity
lmm_fid: [0x200000406:0x29:0x0]
stripe_count: 1 stripe_size: 8388608 pattern: raid0 stripe_offset: 0
/mnt/lustre/d65e.sanity
lmm_fid: [0x200000406:0x25:0x0]
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/f63b.sanity
lmm_magic:         0x0BD10BD0
lmm_seq:           0x200000401
lmm_object_id:     0x1b4a
lmm_fid:           [0x200000401:0x1b4a:0x0]
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 1
        obdidx   objid   objid        group
             1    2505   0x9c9  0x280000400
/mnt/lustre/d65c.sanity
lmm_fid: [0x200000406:0x20:0x0]
stripe_count: 1 stripe_size: 16777216 pattern: raid0 stripe_offset: 1
/mnt/lustre/d65a.sanity
lmm_fid: [0x200000406:0x1c:0x0]
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
PASS 65i (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65j: set default striping on root directory (bug 6367)=========================================================== 20:21:02 (1713486062)
PASS 65j (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65k: validate manual striping works properly with deactivated OSCs ========================================================== 20:21:07 (1713486067)
Check OST status:
lustre-OST0000-osc-MDT0000 is active
lustre-OST0001-osc-MDT0000 is active
total: 1000 open/close in 6.68 seconds: 149.79 ops/second
Deactivate: lustre-OST0000-osc-MDT0000
/home/green/git/lustre-release/lustre/utils/lfs setstripe -i 0 -c 1 /mnt/lustre/d65k.sanity/0
/home/green/git/lustre-release/lustre/utils/lfs setstripe -i 1 -c 1 /mnt/lustre/d65k.sanity/1
 - unlinked 0 (time 1713486081 ; total 0 ; last 0)
total: 1000 unlinks in 4 seconds: 250.000000 unlinks/second
lustre-OST0000-osc-MDT0000 is Activate
oleg216-server: oleg216-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg216-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
total: 1000 open/close in 3.28 seconds: 304.55 ops/second
Deactivate: lustre-OST0001-osc-MDT0000
/home/green/git/lustre-release/lustre/utils/lfs setstripe -i 0 -c 1 /mnt/lustre/d65k.sanity/0
/home/green/git/lustre-release/lustre/utils/lfs setstripe -i 1 -c 1 /mnt/lustre/d65k.sanity/1
 - unlinked 0 (time 1713486094 ; total 0 ; last 0)
total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second
lustre-OST0001-osc-MDT0000 is Activate
oleg216-server: oleg216-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 50
oleg216-server: os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
PASS 65k (32s)
debug_raw_pointers=0
debug_raw_pointers=0
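Test 65k above deactivates one MDT-side OSC at a time and checks that explicit striping on the remaining OSTs still works. A sketch of the mechanism, assuming it is driven from the MDS with the device names shown in the log:

    lctl --device lustre-OST0000-osc-MDT0000 deactivate  # MDS stops allocating new objects on OST0000
    lfs setstripe -i 1 -c 1 /mnt/lustre/dir/file         # from a client: explicit layout on a still-active OST
    lctl --device lustre-OST0000-osc-MDT0000 activate    # restore allocation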
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65l: lfs find on -1 stripe dir ================================================================================== 20:21:41 (1713486101)
PASS 65l (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65m: normal user can't set filesystem default stripe ========================================================== 20:21:45 (1713486105)
running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-c] [2] [/mnt/lustre]
lfs setstripe: setstripe error for '/mnt/lustre': Operation not permitted
PASS 65m (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65n: don't inherit default layout from root for new subdirectories ========================================================== 20:21:49 (1713486109)
Creating new pool
oleg216-server: Pool lustre.test_65n created
Adding targets to pool
oleg216-server: OST lustre-OST0000_UUID added to pool lustre.test_65n
oleg216-server: OST lustre-OST0001_UUID added to pool lustre.test_65n
Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID '
/home/green/git/lustre-release/lustre/utils/lfs getstripe -d /mnt/lustre/d65n.sanity-4
stripe_count: 1 stripe_size: 4194304 pattern: raid0 stripe_offset: -1 pool: test_65n
/home/green/git/lustre-release/lustre/utils/lfs getstripe -d /mnt/lustre
stripe_count: 1 stripe_size: 4194304 pattern: raid0 stripe_offset: -1 pool: test_65n
SKIP: sanity test_65n needs >= 2 MDTs
Destroy the created pools: test_65n
lustre.test_65n
oleg216-server: OST lustre-OST0000_UUID removed from pool lustre.test_65n
oleg216-server: OST lustre-OST0001_UUID removed from pool lustre.test_65n
oleg216-server: Pool lustre.test_65n destroyed
SKIP 65n (13s)
debug_raw_pointers=0
debug_raw_pointers=0
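The pool setup/teardown chatter in 65n (and in 65o below) is the standard OST-pool workflow; a condensed sketch, assuming it runs on the MGS with the pool and fs names from the log:

    lctl pool_new lustre.test_65n                        # create an empty pool
    lctl pool_add lustre.test_65n lustre-OST[0000-0001]  # add targets to it
    lfs setstripe -p test_65n /mnt/lustre/dir            # from a client: bind a layout to the pool
    lctl pool_remove lustre.test_65n lustre-OST[0000-0001]
    lctl pool_destroy lustre.test_65n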
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65o: pool inheritance for mdt component === 20:22:05 (1713486125)
Creating new pool
oleg216-server: Pool lustre.test_65o created
Adding targets to pool
oleg216-server: OST lustre-OST0000_UUID added to pool lustre.test_65o
oleg216-server: OST lustre-OST0001_UUID added to pool lustre.test_65o
/mnt/lustre/d65o.sanity
lcm_layout_gen:    0
lcm_mirror_count:  1
lcm_entry_count:   2
  lcme_id:             N/A
  lcme_mirror_id:      N/A
  lcme_flags:          0
  lcme_extent.e_start: 0
  lcme_extent.e_end:   1048576
    stripe_count: 0 stripe_size: 1048576 pattern: mdt stripe_offset: -1
  lcme_id:             N/A
  lcme_mirror_id:      N/A
  lcme_flags:          0
  lcme_extent.e_start: 1048576
  lcme_extent.e_end:   EOF
    stripe_count: 1 stripe_size: 1048576 pattern: raid0 stripe_offset: -1 pool: test_65o
/mnt/lustre/d65o.sanity/dir2
lcm_layout_gen:    0
lcm_mirror_count:  1
lcm_entry_count:   2
  lcme_id:             N/A
  lcme_mirror_id:      N/A
  lcme_flags:          0
  lcme_extent.e_start: 0
  lcme_extent.e_end:   1048576
    stripe_count: 0 stripe_size: 1048576 pattern: mdt stripe_offset: -1
  lcme_id:             N/A
  lcme_mirror_id:      N/A
  lcme_flags:          0
  lcme_extent.e_start: 1048576
  lcme_extent.e_end:   EOF
    stripe_count: 1 stripe_size: 1048576 pattern: raid0 stripe_offset: -1 pool: test_65o
lcm_layout_gen:    0
lcm_mirror_count:  1
lcm_entry_count:   1
  lcme_id:             N/A
  lcme_mirror_id:      N/A
  lcme_flags:          0
  lcme_extent.e_start: 0
  lcme_extent.e_end:   EOF
    stripe_count: 1 stripe_size: 4194304 pattern: raid0 stripe_offset: -1 pool: test_65o
Destroy the created pools: test_65o
lustre.test_65o
oleg216-server: OST lustre-OST0000_UUID removed from pool lustre.test_65o
oleg216-server: OST lustre-OST0001_UUID removed from pool lustre.test_65o
oleg216-server: Pool lustre.test_65o destroyed
Waiting 90s for 'foo'
PASS 65o (15s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65p: setstripe with yaml file and huge number ========================================================== 20:22:21 (1713486141)
PASS 65p (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65q: setstripe with >=8E offset should fail ========================================================== 20:22:25 (1713486145)
lfs setstripe: cannot set default composite layout for '/mnt/lustre/d65q.sanity/src_dir': Invalid argument
PASS 65q (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 66: update inode blocks count on client ========================================================================= 20:22:28 (1713486148)
8+0 records in
8+0 records out
8192 bytes (8.2 kB) copied, 0.00944033 s, 868 kB/s
PASS 66 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 69: verify oa2dentry return -ENOENT doesn't LBUG ================================================================ 20:22:33 (1713486153)
directio on /mnt/lustre/f69.sanity.2 for 1x4194304 bytes
PASS
fail_loc=0x217
directio on /mnt/lustre/f69.sanity for 2x4194304 bytes
Write error No such file or directory (rc = -1, len = 8388608)
fail_loc=0
directio on /mnt/lustre/f69.sanity for 2x4194304 bytes
PASS
directio on /mnt/lustre/f69.sanity for 1x4194304 bytes
PASS
fail_loc=0x217
directio on /mnt/lustre/f69.sanity for 1x4194304 bytes
Read error: No such file or directory rc = -1
fail_loc=0
PASS 69 (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 70a: verify health_check, health_write don't explode (on OST) ========================================================== 20:22:38 (1713486158)
enable_health_write=off
enable_health_write=0
enable_health_write=on
enable_health_write=1
enable_health_write=0
PASS 70a (4s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity test_71 skipping SLOW test 71
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 72a: Test that remove suid works properly (bug5695) ============================================================== 20:22:45 (1713486165)
running as uid/gid/euid/egid 500/500/500/500, groups: [true]
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d0_runas_test/f6927]
running as uid/gid/euid/egid 500/500/500/500, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/f72a.sanity] [bs=512] [count=1]
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00300488 s, 170 kB/s
PASS 72a (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 72b: Test that we keep mode setting if without file data changed (bug 24226) ========================================================== 20:22:48 (1713486168)
running as uid/gid/euid/egid 500/500/500/500, groups: [true]
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d0_runas_test/f6927]
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [777] [/mnt/lustre/f72b.sanity-fg]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-fg': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [777] [/mnt/lustre/f72b.sanity-fu]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-fu': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [777] [/mnt/lustre/f72b.sanity-dg]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-dg': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [777] [/mnt/lustre/f72b.sanity-du]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-du': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [2777] [/mnt/lustre/f72b.sanity-fg]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-fg': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [2777] [/mnt/lustre/f72b.sanity-fu]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-fu': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [2777] [/mnt/lustre/f72b.sanity-dg]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-dg': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [2777] [/mnt/lustre/f72b.sanity-du]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-du': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [4777] [/mnt/lustre/f72b.sanity-fg]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-fg': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [4777] [/mnt/lustre/f72b.sanity-fu]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-fu': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [4777] [/mnt/lustre/f72b.sanity-dg]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-dg': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [4777] [/mnt/lustre/f72b.sanity-du]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-du': Operation not permitted
PASS 72b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 73: multiple MDC requests (should not deadlock) ========================================================== 20:22:52 (1713486172)
multiop /mnt/lustre/d73-1/f73-1 vO_c
TMPPIPE=/tmp/multiop_open_wait_pipe.6927
fail_loc=0x80000129
fail_loc=0
/mnt/lustre/d73-1/f73-1 has type file OK
/mnt/lustre/d73-1/f73-2 has type file OK
/mnt/lustre/d73-2/f73-3 has type file OK
PASS 73 (29s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 74a: ldlm_enqueue freed-export error path, ls (shouldn't LBUG) ========================================================== 20:23:24 (1713486204)
fail_loc=0x8000030e
/mnt/lustre/f74a
fail_loc=0
PASS 74a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 74b: ldlm_enqueue freed-export error path, touch (shouldn't LBUG) ========================================================== 20:23:29 (1713486209)
fail_loc=0x8000030e
fail_loc=0
PASS 74b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 74c: ldlm_lock_create error path, (shouldn't LBUG) ========================================================== 20:23:34 (1713486214)
fail_loc=0x319
touch: cannot touch '/mnt/lustre/f74c.sanity': No such file or directory
fail_loc=0
PASS 74c (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 76a: confirm clients recycle inodes properly ============================================================== 20:23:39 (1713486219)
before slab objects: 41 created: 512, after slab objects: 41
PASS 76a (35s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 76b: confirm clients recycle directory inodes properly ============================================================== 20:24:16 (1713486256)
slab objects before: 41, after: 41
PASS 76b (19s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77a: normal checksum read/write operation ========================================================== 20:24:37 (1713486277)
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.0841773 s, 99.7 MB/s
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.259134 s, 32.4 MB/s
PASS 77a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77b: checksum error on client write, read ========================================================== 20:24:41 (1713486281)
fail_loc=0x80000409
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.287643 s, 29.2 MB/s
fail_loc=0
set checksum type to crc32, rc = 0
fail_loc=0x80000408
fail_loc=0
set checksum type to adler, rc = 0
fail_loc=0x80000408
fail_loc=0
set checksum type to crc32c, rc = 0
fail_loc=0x80000408
fail_loc=0
set checksum type to t10ip512, rc = 0
fail_loc=0x80000408
fail_loc=0
set checksum type to t10ip4K, rc = 0
fail_loc=0x80000408
fail_loc=0
set checksum type to t10crc512, rc = 0
fail_loc=0x80000408
fail_loc=0
set checksum type to t10crc4K, rc = 0
fail_loc=0x80000408
fail_loc=0
set checksum type to crc32c, rc = 0
PASS 77b (19s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77c: checksum error on client read with debug ========================================================== 20:25:02 (1713486302)
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.238073 s, 35.2 MB/s
osc.lustre-OST0000-osc-ffff88012b4ca800.checksum_dump=1
osc.lustre-OST0001-osc-ffff88012b4ca800.checksum_dump=1
obdfilter.lustre-OST0000.checksum_dump=1
obdfilter.lustre-OST0001.checksum_dump=1
fail_loc=0x80000408
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 1.38911 s, 6.0 MB/s
fail_loc=0
osc.lustre-OST0000-osc-ffff88012b4ca800.checksum_dump=0
osc.lustre-OST0001-osc-ffff88012b4ca800.checksum_dump=0
obdfilter.lustre-OST0000.checksum_dump=0
obdfilter.lustre-OST0001.checksum_dump=0
PASS 77c (11s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77d: checksum error on OST direct write, read ========================================================== 20:25:15 (1713486315)
fail_loc=0x80000409
directio on /mnt/lustre/f77d.sanity for 8x1048576 bytes
PASS
fail_loc=0
fail_loc=0x80000408
directio on /mnt/lustre/f77d.sanity for 8x1048576 bytes
PASS
fail_loc=0
PASS 77d (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77f: repeat checksum error on write (expect error) ========================================================== 20:25:21 (1713486321)
set checksum type to crc32, rc = 0
fail_loc=0x409
directio on /mnt/lustre/f77f.sanity for 8x1048576 bytes
Write error Input/output error (rc = -1, len = 8388608)
fail_loc=0
set checksum type to adler, rc = 0
fail_loc=0x409
directio on /mnt/lustre/f77f.sanity for 8x1048576 bytes
Write error Input/output error (rc = -1, len = 8388608)
fail_loc=0
set checksum type to crc32c, rc = 0
fail_loc=0x409
directio on /mnt/lustre/f77f.sanity for 8x1048576 bytes
Write error Input/output error (rc = -1, len = 8388608)
fail_loc=0
set checksum type to t10ip512, rc = 0
fail_loc=0x409
directio on /mnt/lustre/f77f.sanity for 8x1048576 bytes
Write error Input/output error (rc = -1, len = 8388608)
fail_loc=0
set checksum type to t10ip4K, rc = 0
fail_loc=0x409
directio on /mnt/lustre/f77f.sanity for 8x1048576 bytes
Write error Input/output error (rc = -1, len = 8388608)
fail_loc=0
set checksum type to t10crc512, rc = 0
fail_loc=0x409
directio on /mnt/lustre/f77f.sanity for 8x1048576 bytes
Write error Input/output error (rc = -1, len = 8388608)
fail_loc=0
set checksum type to t10crc4K, rc = 0
fail_loc=0x409
directio on /mnt/lustre/f77f.sanity for 8x1048576 bytes
Write error Input/output error (rc = -1, len = 8388608)
fail_loc=0
set checksum type to crc32c, rc = 0
PASS 77f (395s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77g: checksum error on OST write, read ==== 20:31:57 (1713486717)
fail_loc=0x8000021a
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.216012 s, 38.8 MB/s
fail_loc=0
fail_loc=0x8000021b
fail_loc=0
PASS 77g (5s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77k: enable/disable checksum correctly ==== 20:32:04 (1713486724)
remount client, checksum should be 0
Stopping client oleg216-client.virtnet /mnt/lustre (opts:)
Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre
Waiting 90s for '1'
remount client, checksum should be 1
Stopping client oleg216-client.virtnet /mnt/lustre (opts:)
Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre
remount client with option checksum, checksum should be 1
192.168.202.116@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg216-client.virtnet /mnt/lustre (opts:)
Starting client: oleg216-client.virtnet: -o user_xattr,flock,checksum oleg216-server@tcp:/lustre /mnt/lustre
remount client with option nochecksum, checksum should be 0
192.168.202.116@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg216-client.virtnet /mnt/lustre (opts:)
Starting client: oleg216-client.virtnet: -o user_xattr,flock,nochecksum oleg216-server@tcp:/lustre /mnt/lustre
Stopping client oleg216-client.virtnet /mnt/lustre (opts:)
Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre
Waiting 90s for '0'
PASS 77k (6s)
debug_raw_pointers=0
debug_raw_pointers=0
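Tests 77b-77k cycle through the wire-checksum algorithms and the checksum mount options; a sketch of the client-side knobs involved (hostnames and paths as in this run):

    lctl get_param osc.*.checksum_type          # brackets mark the algorithm in use, e.g. [crc32c]
    lctl set_param osc.*.checksum_type=adler    # switch algorithms on the fly
    lctl set_param osc.*.checksums=0            # disable data checksums entirely
    mount -t lustre -o nochecksum oleg216-server@tcp:/lustre /mnt/lustre  # or decide at mount time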
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77l: preferred checksum type is remembered after reconnected ========================================================== 20:32:11 (1713486731)
osc.lustre-OST0000-osc-ffff880087a26800.idle_timeout=10
osc.lustre-OST0001-osc-ffff880087a26800.idle_timeout=10
error: set_param: setting /proc/fs/lustre/osc/lustre-OST0000-osc-ffff880087a26800/checksum_type=invalid: Invalid argument
error: set_param: setting /proc/fs/lustre/osc/lustre-OST0001-osc-ffff880087a26800/checksum_type=invalid: Invalid argument
error: set_param: setting 'osc/*osc-[^mM]*/checksum_type'='invalid': Invalid argument
set checksum type to invalid, rc = 22
set checksum type to crc32, rc = 0
ldlm.namespaces.lustre-OST0000-osc-ffff880087a26800.lru_size=400
ldlm.namespaces.lustre-OST0001-osc-ffff880087a26800.lru_size=400
oleg216-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid in IDLE state after 5 sec
oleg216-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid in FULL state after 0 sec
set checksum type to adler, rc = 0
ldlm.namespaces.lustre-OST0000-osc-ffff880087a26800.lru_size=400
ldlm.namespaces.lustre-OST0001-osc-ffff880087a26800.lru_size=400
oleg216-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid in IDLE state after 12 sec
oleg216-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid in FULL state after 0 sec
set checksum type to crc32c, rc = 0
ldlm.namespaces.lustre-OST0000-osc-ffff880087a26800.lru_size=400
ldlm.namespaces.lustre-OST0001-osc-ffff880087a26800.lru_size=400
oleg216-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid in IDLE state after 13 sec
oleg216-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid in FULL state after 0 sec
set checksum type to t10ip512, rc = 0
ldlm.namespaces.lustre-OST0000-osc-ffff880087a26800.lru_size=400
ldlm.namespaces.lustre-OST0001-osc-ffff880087a26800.lru_size=400
oleg216-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid in IDLE state after 12 sec
oleg216-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid in FULL state after 0 sec
set checksum type to t10ip4K, rc = 0
ldlm.namespaces.lustre-OST0000-osc-ffff880087a26800.lru_size=400
ldlm.namespaces.lustre-OST0001-osc-ffff880087a26800.lru_size=400
oleg216-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid in IDLE state after 13 sec
oleg216-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid in FULL state after 0 sec
set checksum type to t10crc512, rc = 0
ldlm.namespaces.lustre-OST0000-osc-ffff880087a26800.lru_size=400
ldlm.namespaces.lustre-OST0001-osc-ffff880087a26800.lru_size=400
oleg216-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid in IDLE state after 13 sec
oleg216-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid in FULL state after 0 sec
set checksum type to t10crc4K, rc = 0
ldlm.namespaces.lustre-OST0000-osc-ffff880087a26800.lru_size=400
ldlm.namespaces.lustre-OST0001-osc-ffff880087a26800.lru_size=400
oleg216-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid in IDLE state after 12 sec
oleg216-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff880087a26800.ost_server_uuid in FULL state after 0 sec
osc.lustre-OST0000-osc-ffff880087a26800.idle_timeout=20
osc.lustre-OST0001-osc-ffff880087a26800.idle_timeout=20
set checksum type to crc32c, rc = 0
PASS 77l (100s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77m: Verify checksum_speed is correctly read ========================================================== 20:33:52 (1713486832)
checksum_speed=
adler32: 1423
crc32: 1687
crc32c: 10051
PASS 77m (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77n: Verify read from a hole inside contiguous blocks with T10PI ========================================================== 20:33:55 (1713486835)
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00390605 s, 1.0 MB/s
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00128718 s, 3.2 MB/s
/mnt/lustre/f77n.sanity: FIBMAP unsupported
SKIP: sanity test_77n f77n.sanity blocks not contiguous around hole
SKIP 77n (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77o: Verify checksum_type for server (mdt and ofd(obdfilter)) ========================================================== 20:33:58 (1713486838)
obdfilter.lustre-*.checksum_type:
crc32 adler [crc32c] t10ip512 t10ip4K t10crc512 t10crc4K
crc32 adler [crc32c] t10ip512 t10ip4K t10crc512 t10crc4K
mdt.lustre-*.checksum_type:
crc32 adler [crc32c] t10ip512 t10ip4K t10crc512 t10crc4K
PASS 77o (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 78: handle large O_DIRECT writes correctly ====================================================================== 20:34:02 (1713486842)
MemFree: 3319, Max file size: 600000
MemTotal: 3730
Mem to use for directio: 1737
Smallest OST: 3765248
File size: 32
directIO rdwr round 1 of 1
directio on /mnt/lustre/f78.sanity for 32x1048576 bytes
PASS
PASS 78 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 79: df report consistency check ================================================================================= 20:34:06 (1713486846)
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 79 (19s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 80: Page eviction is equally fast at high offsets too ========================================================== 20:34:27 (1713486867)
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0272201 s, 38.5 MB/s
PASS 80 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 81a: OST should retry write when get -ENOSPC ========================================================================= 20:34:31 (1713486871)
fail_loc=0x80000228
PASS 81a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 81b: OST should return -ENOSPC when retry still fails ================================================================= 20:34:34 (1713486874)
fail_loc=0x228
write: No space left on device
PASS 81b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
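The IDLE/FULL transitions that 77l waits for can also be observed by hand; a sketch, assuming a client node with the values used by this run:

    lctl set_param osc.*.idle_timeout=10      # disconnect idle OSC imports after 10s
    lctl get_param osc.*.import | grep state  # current import state: FULL, IDLE, CONNECTING, ...
    lctl set_param osc.*.idle_timeout=20      # restore the value this run started with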
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 99: cvs strange file/directory operations ========================================================== 20:34:38 (1713486878)
running as uid/gid/euid/egid 500/500/500/500, groups: [cvs] [-d] [/mnt/lustre/d99.sanity.cvsroot] [init]
running as uid/gid/euid/egid 500/500/500/500, groups: [cvs] [-d] [/mnt/lustre/d99.sanity.cvsroot] [import] [-m] [nomesg] [d99.sanity.reposname] [vtag] [rtag]
N d99.sanity.reposname/README
N d99.sanity.reposname/network
N d99.sanity.reposname/netconsole
N d99.sanity.reposname/functions
No conflicts created by this import
running as uid/gid/euid/egid 500/500/500/500, groups: [cvs] [-d] [/mnt/lustre/d99.sanity.cvsroot] [co] [d99.sanity.reposname]
cvs checkout: Updating d99.sanity.reposname
U d99.sanity.reposname/README
U d99.sanity.reposname/functions
U d99.sanity.reposname/netconsole
U d99.sanity.reposname/network
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [foo99]
running as uid/gid/euid/egid 500/500/500/500, groups: [cvs] [add] [-m] [addmsg] [foo99]
cvs add: scheduling file `foo99' for addition
cvs add: use 'cvs commit' to add this file permanently
running as uid/gid/euid/egid 500/500/500/500, groups: [cvs] [update]
cvs update: Updating .
A foo99
running as uid/gid/euid/egid 500/500/500/500, groups: [cvs] [commit] [-m] [nomsg] [foo99]
RCS file: /mnt/lustre/d99.sanity.cvsroot/d99.sanity.reposname/foo99,v
done
Checking in foo99;
/mnt/lustre/d99.sanity.cvsroot/d99.sanity.reposname/foo99,v <-- foo99
initial revision: 1.1
done
PASS 99 (5s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 100: check local port using privileged port ========================================================== 20:34:44 (1713486884)
PASS 100 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 101a: check read-ahead for random reads === 20:34:48 (1713486888)
nreads: 10000 file size: 96MB
27.967732s, 23.4327MB/s
osc.lustre-OST0000-osc-ffff880087a26800.rpc_stats=
snapshot_time:         1713486919.806738991 secs.nsecs
start_time:            1713486888.373447300 secs.nsecs
elapsed_time:          31.433291691 secs.nsecs
read RPCs in flight:  0
write RPCs in flight: 0
pending write pages:  0
pending read pages:   0

                        read                    write
pages per rpc         rpcs   % cum % |      rpcs   % cum %
1:                       0   0   0   |         2  50  50
2:                       0   0   0   |         1  25  75
4:                       0   0   0   |         0   0  75
8:                       0   0   0   |         1  25 100

                        read                    write
rpcs in flight        rpcs   % cum % |      rpcs   % cum %
1:                       0   0   0   |         1  25  25
2:                       0   0   0   |         1  25  50
3:                       0   0   0   |         1  25  75
4:                       0   0   0   |         1  25 100

                        read                    write
offset                rpcs   % cum % |      rpcs   % cum %
0:                       0   0   0   |         4 100 100

osc.lustre-OST0001-osc-ffff880087a26800.rpc_stats=
snapshot_time:         1713486919.806860146 secs.nsecs
start_time:            1713486888.373531667 secs.nsecs
elapsed_time:          31.433328479 secs.nsecs
read RPCs in flight:  0
write RPCs in flight: 0
pending write pages:  192
pending read pages:   0

                        read                    write
pages per rpc         rpcs   % cum % |      rpcs   % cum %
1:                       0   0   0   |         2   1   1
2:                       2   0   0   |         1   0   2
4:                       1   0   0   |         0   0   2
8:                       4   0   0   |         0   0   2
16:                   4992  99 100   |         0   0   2
32:                      0   0 100   |         0   0   2
64:                      0   0 100   |         0   0   2
128:                     0   0 100   |         0   0   2
256:                     0   0 100   |        99  97 100

                        read                    write
rpcs in flight        rpcs   % cum % |      rpcs   % cum %
1:                    4999 100 100   |        99  97  97
2:                       0   0 100   |         2   1  99
3:                       0   0 100   |         1   0 100

                        read                    write
offset                rpcs   % cum % |      rpcs   % cum %
0:                       3   0   0   |         4   3   3
1:                       0   0   0   |         0   0   3
2:                       0   0   0   |         0   0   3
4:                       0   0   0   |         0   0   3
8:                       0   0   0   |         0   0   3
16:                      5   0   0   |         0   0   3
32:                      7   0   0   |         0   0   3
64:                     16   0   0   |         0   0   3
128:                    26   0   1   |         0   0   3
256:                    52   1   2   |         1   0   4
512:                   107   2   4   |         2   1   6
1024:                  203   4   8   |         4   3  10
2048:                  431   8  17   |         8   7  18
4096:                  860  17  34   |        16  15  34
8192:                 1671  33  67   |        32  31  65
16384:                1618  32 100   |        35  34 100

llite.lustre-ffff880087a26800.read_ahead_stats=
snapshot_time             1713486919.810972313 secs.nsecs
start_time                1713486888.376444413 secs.nsecs
elapsed_time              31.434527900 secs.nsecs
hits                      74854 samples [pages]
misses                    4999 samples [pages]
readpage_not_consecutive  9991 samples [pages]
zero_size_window          74854 samples [pages]
failed_to_fast_read       4999 samples [pages]
readahead_pages           4999 samples [pages] 1 15 74854
PASS 101a (34s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 101b: check stride-io mode read-ahead =========================================================================== 20:35:22 (1713486922)
oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/read_cache_enable': No such file or directory
oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/read_cache_enable'='0': No such file or directory
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2
oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/writethrough_cache_enable': No such file or directory
oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/writethrough_cache_enable'='0': No such file or directory
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2
4.126188s, 2.03302MB/s
Read-ahead success for size 8192
3.928651s, 2.13524MB/s
Read-ahead success for size 16384
3.523855s, 2.38052MB/s
Read-ahead success for size 32768
3.324247s, 2.52346MB/s
Read-ahead success for size 65536
3.246454s, 2.58393MB/s
Read-ahead success for size 131072
3.238750s, 2.59008MB/s
Read-ahead success for size 262144
3.574136s, 2.34703MB/s
Read-ahead success for size 524288
3.126877s, 2.68274MB/s
Read-ahead success for size 1048576
oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/read_cache_enable': No such file or directory
oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/read_cache_enable'='1': No such file or directory
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2
oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/writethrough_cache_enable': No such file or directory
oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/writethrough_cache_enable'='1': No such file or directory
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2
PASS 101b (33s)
debug_raw_pointers=0
debug_raw_pointers=0
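The rpc_stats and read_ahead_stats dumps above are zeroed and re-read around each workload; a sketch of that measurement loop (client node; the file name from 101d is illustrative):

    lctl set_param osc.*.rpc_stats=0 llite.*.read_ahead_stats=0   # clear the histograms
    dd if=/mnt/lustre/f101d.sanity of=/dev/null bs=1M             # drive the read pattern under test
    lctl get_param osc.*.rpc_stats            # per-OSC RPC size/flight/offset histograms
    lctl get_param llite.*.read_ahead_stats   # readahead window hits and misses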
pages: 0 pending read pages: 0 read write pages per rpc rpcs % cum % | rpcs % cum % 1: 0 0 0 | 0 0 0 2: 0 0 0 | 0 0 0 4: 0 0 0 | 0 0 0 8: 0 0 0 | 0 0 0 16: 798 100 100 | 0 0 0 read write rpcs in flight rpcs % cum % | rpcs % cum % 1: 798 100 100 | 0 0 0 read write offset rpcs % cum % | rpcs % cum % 0: 1 0 0 | 0 0 0 1: 0 0 0 | 0 0 0 2: 0 0 0 | 0 0 0 4: 0 0 0 | 0 0 0 8: 0 0 0 | 0 0 0 16: 1 0 0 | 0 0 0 32: 2 0 0 | 0 0 0 64: 4 0 1 | 0 0 0 128: 8 1 2 | 0 0 0 256: 16 2 4 | 0 0 0 512: 32 4 8 | 0 0 0 1024: 63 7 15 | 0 0 0 2048: 128 16 31 | 0 0 0 4096: 255 31 63 | 0 0 0 8192: 288 36 100 | 0 0 0 osc.lustre-OST0001-osc-ffff880087a26800.rpc_stats= snapshot_time: 1713486970.007249734 secs.nsecs start_time: 1713486960.998630228 secs.nsecs elapsed_time: 9.008619506 secs.nsecs read RPCs in flight: 0 write RPCs in flight: 0 pending write pages: 0 pending read pages: 0 read write pages per rpc rpcs % cum % | rpcs % cum % 1: 0 0 0 | 0 0 0 2: 0 0 0 | 0 0 0 4: 0 0 0 | 0 0 0 8: 0 0 0 | 0 0 0 16: 799 100 100 | 0 0 0 read write rpcs in flight rpcs % cum % | rpcs % cum % 1: 799 100 100 | 0 0 0 read write offset rpcs % cum % | rpcs % cum % 0: 1 0 0 | 0 0 0 1: 0 0 0 | 0 0 0 2: 0 0 0 | 0 0 0 4: 0 0 0 | 0 0 0 8: 0 0 0 | 0 0 0 16: 1 0 0 | 0 0 0 32: 2 0 0 | 0 0 0 64: 4 0 1 | 0 0 0 128: 8 1 2 | 0 0 0 256: 16 2 4 | 0 0 0 512: 32 4 8 | 0 0 0 1024: 64 8 16 | 0 0 0 2048: 128 16 32 | 0 0 0 4096: 255 31 63 | 0 0 0 8192: 288 36 100 | 0 0 0 osc.lustre-OST0000-osc-ffff880087a26800.rpc_stats check passed! osc.lustre-OST0001-osc-ffff880087a26800.rpc_stats check passed! oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/read_cache_enable': No such file or directory oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/read_cache_enable'='1': No such file or directory pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2 oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/writethrough_cache_enable': No such file or directory oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/writethrough_cache_enable'='1': No such file or directory pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2 PASS 101c (16s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 101d: file read with and without read-ahead enabled ========================================================== 20:36:13 (1713486973) Create test file /mnt/lustre/f101d.sanity size 80M, 7255M free 80+0 records in 80+0 records out 83886080 bytes (84 MB) copied, 1.90338 s, 44.1 MB/s Cancel LRU locks on lustre client to flush the client cache Disable read-ahead 0 Reading the test file /mnt/lustre/f101d.sanity with read-ahead disabled Cancel LRU locks on lustre client to flush the client cache Enable read-ahead with 40MB Reading the test file /mnt/lustre/f101d.sanity with read-ahead enabled read-ahead disabled time read '39.9179' read-ahead enabled time read '14.3496' sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 101d (77s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 101e: check read-ahead for small read(1k) for small files(500k) ========================================================== 20:37:31 (1713487051) Creating 100 500K test files Cancel LRU locks on lustre client to flush the client cache Reset readahead stats llite.lustre-ffff880087a26800.max_cached_mb= users: 5 max_cached_mb: 1865 used_mb: 49 unused_mb: 1816 reclaim_count: 0 max_read_ahead_mb: 256 used_read_ahead_mb: 0 
llite.lustre-ffff880087a26800.read_ahead_stats=
snapshot_time             1713487070.015620899 secs.nsecs
start_time                1713487064.051562010 secs.nsecs
elapsed_time              5.964058889 secs.nsecs
hits                      12300 samples [pages]
misses                    200 samples [pages]
zero_size_window          100 samples [pages]
failed_to_fast_read       200 samples [pages]
readahead_pages           100 samples [pages] 123 123 12300
PASS 101e (21s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 101f: check mmap read performance ========= 20:37:54 (1713487074)
/opt/iozone/bin/iozone
debug=reada mmap
Cancel LRU locks on lustre client to flush the client cache
Reset readahead stats
mmap read the file with small block size
checking missing pages
llite.lustre-ffff880087a26800.read_ahead_stats=
snapshot_time             1713487075.362140384 secs.nsecs
start_time                1713487075.270907348 secs.nsecs
elapsed_time              0.091233036 secs.nsecs
debug=trace inode super iotrace malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec lfsck hsm snapshot layout
PASS 101f (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 101g: Big bulk(4/16 MiB) readahead ======== 20:37:58 (1713487078)
remount client to enable new RPC size
Stopping client oleg216-client.virtnet /mnt/lustre (opts:)
Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre
osc.lustre-OST0000-osc-ffff88012b4ce800.max_pages_per_rpc=16M
osc.lustre-OST0001-osc-ffff88012b4ce800.max_pages_per_rpc=16M
10+0 records in
10+0 records out
167772160 bytes (168 MB) copied, 4.45276 s, 37.7 MB/s
10+0 records in
10+0 records out
167772160 bytes (168 MB) copied, 3.6545 s, 45.9 MB/s
osc.lustre-OST0000-osc-ffff88012b4ce800.max_pages_per_rpc=8M
osc.lustre-OST0001-osc-ffff88012b4ce800.max_pages_per_rpc=8M
10+0 records in
10+0 records out
83886080 bytes (84 MB) copied, 2.22777 s, 37.7 MB/s
10+0 records in
10+0 records out
83886080 bytes (84 MB) copied, 2.60584 s, 32.2 MB/s
osc.lustre-OST0000-osc-ffff88012b4ce800.max_pages_per_rpc=4M
osc.lustre-OST0001-osc-ffff88012b4ce800.max_pages_per_rpc=4M
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 1.06113 s, 39.5 MB/s
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.965223 s, 43.5 MB/s
Stopping client oleg216-client.virtnet /mnt/lustre (opts:)
Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre
PASS 101g (25s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 101h: Readahead should cover current read window ========================================================== 20:38:25 (1713487105)
70+0 records in
70+0 records out
73400320 bytes (73 MB) copied, 1.92396 s, 38.2 MB/s
Cancel LRU locks on lustre client to flush the client cache
Reset readahead stats
Read 10M of data but cross 64M boundary
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.30828 s, 34.0 MB/s
PASS 101h (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 101i: allow current readahead to exceed reservation ========================================================== 20:38:31 (1713487111)
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.259741 s, 40.4 MB/s
llite.lustre-ffff8800aaaac800.max_read_ahead_per_file_mb=1
Reset readahead stats
llite.lustre-ffff8800aaaac800.read_ahead_stats=0
5+0 records in
5+0 records out
10485760 bytes (10 MB) copied, 0.459216 s, 22.8 MB/s
llite.lustre-ffff8800aaaac800.read_ahead_stats=
snapshot_time             1713487112.789008369 secs.nsecs
start_time                1713487112.317597927 secs.nsecs
elapsed_time              0.471410442 secs.nsecs
hits                      2555 samples [pages]
misses                    5 samples [pages]
zero_size_window          2555 samples [pages]
failed_to_fast_read       6 samples [pages]
readahead_pages           5 samples [pages] 511 511 2555
llite.lustre-ffff8800aaaac800.max_read_ahead_per_file_mb=64
PASS 101i (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 101j: A complete read block should be submitted when no RA ========================================================== 20:38:35 (1713487115)
Disable read-ahead
16+0 records in
16+0 records out
16777216 bytes (17 MB) copied, 0.416025 s, 40.3 MB/s
Reset readahead stats
4096+0 records in
4096+0 records out
16777216 bytes (17 MB) copied, 8.84186 s, 1.9 MB/s
snapshot_time             1713487125.531878201 secs.nsecs
start_time                1713487116.668667868 secs.nsecs
elapsed_time              8.863210333 secs.nsecs
failed_to_fast_read       4096 samples [pages]
Reset readahead stats
16+0 records in
16+0 records out
16777216 bytes (17 MB) copied, 0.429765 s, 39.0 MB/s
snapshot_time             1713487126.161033973 secs.nsecs
start_time                1713487125.711892799 secs.nsecs
elapsed_time              0.449141174 secs.nsecs
failed_to_fast_read       16 samples [pages]
Reset readahead stats
1+0 records in
1+0 records out
16777216 bytes (17 MB) copied, 0.259755 s, 64.6 MB/s
snapshot_time             1713487126.616890781 secs.nsecs
start_time                1713487126.335634677 secs.nsecs
elapsed_time              0.281256104 secs.nsecs
failed_to_fast_read       1 samples [pages]
PASS 101j (12s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 101m: read ahead for small file and last stripe of the file ========================================================== 20:38:49 (1713487129)
fallocate on zfs doesn't consume space
fallocate not supported
SKIP: sanity test_101m need >= 2.13.57 and ldiskfs for fallocate
SKIP 101m (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 102a: user xattr test ============================================================================================ 20:38:51 (1713487131)
set/get xattr...
trusted.name1="value1"
user.author1="author1"
listxattr...
remove xattr...
set lustre special xattr ...
lfs setstripe: setstripe error for '/mnt/lustre/f102a.sanity': stripe already set
getfattr: Removing leading '/' from absolute path names
setfattr: /mnt/lustre/f102a.sanity: Numerical result out of range
PASS 102a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 102b: getfattr/setfattr for trusted.lov EAs ========================================================== 20:38:54 (1713487134)
test layout '-S 65536 -i 1 -c 2'
lmm_stripe_count:  2
lmm_stripe_size:   65536
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 1
lmm_objects:
      - l_ost_idx: 1
        l_fid: 0x280000400:0xf1a:0x0
      - l_ost_idx: 0
        l_fid: 0x240000400:0xf2b:0x0
get/set/list trusted.lov xattr ...
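The loop that follows retries setfattr on trusted.lov with a value grown two bytes at a time; any buffer shorter than the file's layout descriptor is rejected with ERANGE ("Numerical result out of range"), and the first accepted size marks roughly where the descriptor ends. A stand-alone sketch of the same probe, assuming nothing about the test's own helper; the paths and size range are illustrative:

    # Probe the smallest trusted.lov value the MDS accepts.
    src=/mnt/lustre/f102b.sanity        # file whose layout is copied
    dst=/mnt/lustre/f102b.sanity.2      # file whose xattr is rewritten
    hex=$(getfattr --absolute-names -n trusted.lov -e hex "$src" |
            sed -n 's/^trusted\.lov=0x//p')
    for size in $(seq 4 2 162); do
            echo "setfattr $size $dst"
            setfattr -n trusted.lov -v "0x${hex:0:$((size * 2))}" "$dst"
    done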
getfattr: Removing leading '/' from absolute path names
setfattr 4 /mnt/lustre/f102b.sanity.2
setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range
setfattr 6 /mnt/lustre/f102b.sanity.2
setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range
[... the identical failure repeats for every even size from 8 through 64 ...]
setfattr 66 /mnt/lustre/f102b.sanity.2
setfattr 68 /mnt/lustre/f102b.sanity.2
[... setfattr succeeds, with no error reported, for every even size through 162 ...]
setfattr 162 /mnt/lustre/f102b.sanity.2
test layout '-E 1M -S 65536 -i 1 -c 2 -Eeof -S4M'
lcm_layout_gen:    2
lcm_mirror_count:  1
lcm_entry_count:   2
component0:
    lcme_id:             1
    lcme_mirror_id:      0
    lcme_flags:          init
    lcme_extent.e_start: 0
    lcme_extent.e_end:   1048576
    sub_layout:
      lmm_stripe_count:  2
      lmm_stripe_size:   65536
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: 1
      lmm_objects:
      - l_ost_idx: 1
        l_fid: 0x280000400:0xf1c:0x0
      - l_ost_idx: 0
        l_fid: 0x240000400:0xf2d:0x0
component1:
    lcme_id:             2
    lcme_mirror_id:      0
    lcme_flags:          0
    lcme_extent.e_start: 1048576
    lcme_extent.e_end:   EOF
    sub_layout:
      lmm_stripe_count:  2
      lmm_stripe_size:   4194304
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: -1
get/set/list trusted.lov xattr ...
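With the composite two-component layout dumped above, the trusted.lov descriptor is several times larger than for the plain '-S 65536 -i 1 -c 2' layout, so in the run below the same probe keeps failing with ERANGE until far larger sizes: the first accepted size is 482, versus 66 in the first run. The two descriptor lengths can be compared directly; a minimal sketch, with the paths illustrative:

    # layout xattr length in bytes for each file (hex digits divided by two)
    for f in /mnt/lustre/f102b.sanity /mnt/lustre/f102b.sanity.2; do
            getfattr --absolute-names -n trusted.lov -e hex "$f" |
                    sed -n 's/^trusted\.lov=0x//p' |
                    awk '{ print length($0) / 2 }'
    done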
getfattr: Removing leading '/' from absolute path names
setfattr 4 /mnt/lustre/f102b.sanity.2
setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range
setfattr 6 /mnt/lustre/f102b.sanity.2
setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range
[... with the composite layout the identical failure repeats for every even size from 8 through 480 ...]
setfattr 482 /mnt/lustre/f102b.sanity.2
PASS 102b (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 102c: non-root getfattr/setfattr for lustre.lov EAs ===================================================================== 20:38:59 (1713487139)
get/set/list lustre.lov xattr ...
running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [65536] [-i] [1] [-c] [2] [/mnt/lustre/d102c.sanity/f102c.sanity]
running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [getstripe] [-c] [/mnt/lustre/d102c.sanity/f102c.sanity]
lustre.lov=0s0AvRCwEAAAAMAAAAAAAAAAkEAAACAAAAAAABAAIAAAAABACAAgAAAB4PAAAAAAAAAAAAAAEAAAAABABAAgAAAC8PAAAAAAAAAAAAAAAAAAA=
running as uid/gid/euid/egid 500/500/500/500, groups: [mcreate] [/mnt/lustre/d102c.sanity/f102c.sanity2]
running as uid/gid/euid/egid 500/500/500/500, groups: [setfattr] [-n] [lustre.lov] [-v] [0s0AvRCwEAAAAMAAAAAAAAAAkEAAACAAAAAAABAAIAAAAABACAAgAAAB4PAAAAAAAAAAAAAAEAAAAABABAAgAAAC8PAAAAAAAAAAAAAAAAAAA=] [/mnt/lustre/d102c.sanity/f102c.sanity2]
running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [getstripe] [-S] [/mnt/lustre/d102c.sanity/f102c.sanity2]
running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [getstripe] [-c] [/mnt/lustre/d102c.sanity/f102c.sanity2]
PASS 102c (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 102d: tar restore stripe info from tarfile, not keep osts ========================================================== 20:39:02 (1713487142)
PASS 102d (5s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 102f: tar copy files, not keep osts ======= 20:39:08 (1713487148)
PASS 102f (5s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 102h: grow xattr from inside inode to external block ========================================================== 20:39:15 (1713487155)
save trusted.big on /mnt/lustre/f102h.sanity
save trusted.sml on /mnt/lustre/f102h.sanity
grow trusted.sml on /mnt/lustre/f102h.sanity
trusted.big still valid after growing trusted.sml
PASS 102h (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 102ha: grow xattr from inside inode to external inode ========================================================== 20:39:20
(1713487160) setting xattr of max xattr size: 65536 save trusted.big on /mnt/lustre/f102ha.sanity save trusted.sml on /mnt/lustre/f102ha.sanity grow trusted.sml on /mnt/lustre/f102ha.sanity trusted.big still valid after growing trusted.sml setting xattr of > max xattr size: 65536 + 10 This should fail: save trusted.big on /mnt/lustre/f102ha.sanity setfattr: /mnt/lustre/f102ha.sanity: Argument list too long PASS 102ha (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102i: lgetxattr test on symbolic link ====================================================================== 20:39:25 (1713487165) getfattr: Removing leading '/' from absolute path names # file: mnt/lustre/f102i.sanity trusted.lov=0s0AvRCwEAAAB2AAAAAAAAAAkEAAACAAAAAABAAAEAAAAABABAAgAAAGIPAAAAAAAAAAAAAAAAAAA= /mnt/lustre/f102i.sanitylink: trusted.lov: No such attribute PASS 102i (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102j: non-root tar restore stripe info from tarfile, not keep osts ============================================================= 20:39:29 (1713487169) running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [65536] [-i] [1] [-c] [2] [d102j.sanity] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [65536] [-i] [0] [-c] [1] [file1-0-1] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [65536] [-i] [1] [-c] [1] [file1-1-1] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [65536] [-i] [0] [-c] [2] [file1-0-2] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [65536] [-i] [1] [-c] [2] [file1-1-2] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [131072] [-i] [0] [-c] [1] [file2-0-1] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [131072] [-i] [1] [-c] [1] [file2-1-1] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [131072] [-i] [0] [-c] [2] [file2-0-2] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [131072] [-i] [1] [-c] [2] [file2-1-2] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [196608] [-i] [0] [-c] [1] [file3-0-1] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [196608] [-i] [1] [-c] [1] [file3-1-1] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [196608] [-i] [0] [-c] [2] [file3-0-2] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [196608] [-i] [1] [-c] [2] [file3-1-2] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [262144] [-i] [0] [-c] [1] [file4-0-1] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [262144] [-i] [1] [-c] [1] 
[file4-1-1] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [262144] [-i] [0] [-c] [2] [file4-0-2] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [262144] [-i] [1] [-c] [2] [file4-1-2] running as uid/gid/euid/egid 500/500/500/500, groups: [tar] [cf] [/tmp/f102.tar] [d102j.sanity] [--xattrs] running as uid/gid/euid/egid 500/500/500/500, groups: [tar] [xf] [/tmp/f102.tar] [-C] [/mnt/lustre/d102j.sanity] [--xattrs] [--xattrs-include=lustre.*] PASS 102j (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102k: setfattr without parameter of value shouldn't cause a crash ========================================================== 20:39:35 (1713487175) PASS 102k (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102l: listxattr size test ============================================================================================ 20:39:38 (1713487178) listxattr as user... PASS 102l (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102m: Ensure listxattr fails on small buffer ================================================================== 20:39:42 (1713487182) PASS 102m (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102n: silently ignore setxattr on internal trusted xattrs ========================================================== 20:39:45 (1713487185) setfattr: /mnt/lustre/f102n.sanity.1: Numerical result out of range PASS 102n (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102p: check setxattr(2) correctly fails without permission ========================================================== 20:39:49 (1713487189) setfacl as user... running as uid/gid/euid/egid 500/500/500/500, groups: [setfacl] [-m] [u:500:rwx] [/mnt/lustre/f102p.sanity] setfacl: /mnt/lustre/f102p.sanity: Operation not permitted setfattr as user...
running as uid/gid/euid/egid 500/500/500/500, groups: [setfattr] [-x] [system.posix_acl_access] [/mnt/lustre/f102p.sanity] setfattr: /mnt/lustre/f102p.sanity: Operation not permitted PASS 102p (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102q: flistxattr should not return trusted.link EAs for orphans ========================================================== 20:39:52 (1713487192) PASS 102q (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102r: set EAs with empty values =========== 20:39:56 (1713487196) getfattr: Removing leading '/' from absolute path names # file: mnt/lustre/f102r.sanity user.f102r.sanity getfattr: Removing leading '/' from absolute path names # file: mnt/lustre/d102r.sanity user.d102r.sanity getfattr: Removing leading '/' from absolute path names # file: mnt/lustre/d102r.sanity user.d102r.sanity PASS 102r (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102s: getting nonexistent xattrs should fail ========================================================== 20:40:00 (1713487200) llite.lustre-ffff8800aaaac800.xattr_cache=0 /mnt/lustre/f102s.sanity: lustre.n102s: No such attribute /mnt/lustre/f102s.sanity: security.n102s: No such attribute /mnt/lustre/f102s.sanity: system.n102s: Operation not supported /mnt/lustre/f102s.sanity: trusted.n102s: No such attribute /mnt/lustre/f102s.sanity: user.n102s: No such attribute llite.lustre-ffff8800aaaac800.xattr_cache=1 /mnt/lustre/f102s.sanity: lustre.n102s: No such attribute /mnt/lustre/f102s.sanity: security.n102s: No such attribute /mnt/lustre/f102s.sanity: system.n102s: Operation not supported /mnt/lustre/f102s.sanity: trusted.n102s: No such attribute /mnt/lustre/f102s.sanity: user.n102s: No such attribute PASS 102s (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102t: zero length xattr values handled correctly ========================================================== 20:40:03 (1713487203) llite.lustre-ffff8800aaaac800.xattr_cache=0 llite.lustre-ffff8800aaaac800.xattr_cache=1 PASS 102t (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 103a: acl test ============================ 20:40:06 (1713487206) /usr/bin/setfacl mdt.lustre-MDT0000.job_xattr=NONE uid=1(bin) gid=1(bin) groups=1(bin) uid=2(daemon) gid=2(daemon) groups=2(daemon),1(bin) users:x:100: Adding user daemon to group bin Adding user daemon to group bin performing cp with bin='bin' daemon='daemon' users='users'... [3] $ umask 022 -- ok [4] $ mkdir d -- ok [5] $ cd d -- ok [6] $ touch f -- ok [7] $ setfacl -m u:bin:rw f -- ok [8] $ ls -l f | awk -- '{ print $1 }' -- ok [11] $ cp f g -- ok [12] $ ls -l g | awk -- '{sub(/\./, "", $1); print $1 }' -- ok [15] $ rm g -- ok [16] $ cp -p f g -- ok [17] $ ls -l f | awk -- '{ print $1 }' -- ok [20] $ mkdir h -- ok [21] $ echo blubb > h/x -- ok [22] $ cp -rp h i -- ok [23] $ cat i/x -- ok [26] $ rm -r i -- ok [31] $ setfacl -R -m u:bin:rwx h -- ok [32] $ getfacl --omit-header h/x -- ok [40] $ cp -rp h i -- ok [41] $ getfacl --omit-header i/x -- ok [49] $ cd .. -- ok [50] $ rm -r d -- ok 22 commands (22 passed, 0 failed) performing getfacl-noacl with bin='bin' daemon='daemon' users='users'... 
[4] $ mkdir test -- ok [5] $ cd test -- ok [6] $ umask 027 -- ok [7] $ touch x -- ok [8] $ getfacl --omit-header x -- ok [14] $ getfacl --omit-header --access x -- ok [20] $ getfacl --omit-header -d x -- ok [21] $ getfacl --omit-header -d . -- ok [22] $ getfacl --omit-header -d / -- ok [25] $ getfacl --skip-base x -- ok [26] $ getfacl --omit-header --all-effective x -- ok [32] $ getfacl --omit-header --no-effective x -- ok [38] $ mkdir d -- ok [39] $ touch d/y -- ok [46] $ getfacl -dRP . | grep file | sort -- ok [51] $ ln -s d l -- ok [53] $ ln -s l ll -- ok [62] $ rm l ll x -- ok [63] $ rm -rf d -- ok [64] $ cd .. -- ok [65] $ rmdir test -- ok 21 commands (21 passed, 0 failed) performing misc with bin='bin' daemon='daemon' users='users'... [6] $ umask 027 -- ok [7] $ touch f -- ok [10] $ setfacl -m u::r f -- ok [11] $ setfacl -m u::rw,u:bin:rw f -- ok [12] $ ls -dl f | awk '{print $1}' -- ok [15] $ getfacl --omit-header f -- ok [23] $ rm f -- ok [24] $ umask 022 -- ok [25] $ touch f -- ok [26] $ setfacl -m u:bin:rw f -- ok [27] $ ls -dl f | awk '{print $1}' -- ok [30] $ getfacl --omit-header f -- ok [38] $ rm f -- ok [39] $ umask 027 -- ok [40] $ mkdir d -- ok [41] $ setfacl -m u:bin:rwx d -- ok [42] $ ls -dl d | awk '{print $1}' -- ok [45] $ getfacl --omit-header d -- ok [53] $ rmdir d -- ok [54] $ umask 022 -- ok [55] $ mkdir d -- ok [56] $ setfacl -m u:bin:rwx d -- ok [57] $ ls -dl d | awk '{print $1}' -- ok [60] $ getfacl --omit-header d -- ok [68] $ rmdir d -- ok [73] $ umask 022 -- ok [74] $ touch f -- ok [75] $ setfacl -m u:bin:rw,u:daemon:r f -- ok [76] $ ls -dl f | awk '{print $1}' -- ok [79] $ getfacl --omit-header f -- ok [90] $ setfacl -m g:users:rw,g:daemon:r f -- ok [91] $ ls -dl f | awk '{print $1}' -- ok [94] $ getfacl --omit-header f -- ok [107] $ setfacl -x g:users f -- ok [108] $ ls -dl f | awk '{print $1}' -- ok [111] $ getfacl --omit-header f -- ok [123] $ setfacl -x u:daemon f -- ok [124] $ ls -dl f | awk '{print $1}' -- ok [127] $ getfacl --omit-header f -- ok [136] $ rm f -- ok [140] $ umask 027 -- ok [141] $ mkdir d -- ok [142] $ setfacl -m u:bin:rwx,u:daemon:rw,d:u:bin:rwx,d:m:rx d -- ok [143] $ ls -dl d | awk '{print $1}' -- ok [146] $ getfacl --omit-header d -- ok [162] $ umask 027 -- ok [163] $ touch d/f -- ok [164] $ ls -dl d/f | awk '{print $1}' -- ok [167] $ getfacl --omit-header d/f -- ok [175] $ rm d/f -- ok [176] $ umask 022 -- ok [177] $ touch d/f -- ok [178] $ ls -dl d/f | awk '{print $1}' -- ok [181] $ getfacl --omit-header d/f -- ok [189] $ rm d/f -- ok [193] $ umask 000 -- ok [194] $ mkdir d/d -- ok [195] $ ls -dl d/d | awk '{print $1}' -- ok [198] $ getfacl --omit-header d/d -- ok [211] $ rmdir d/d -- ok [212] $ umask 022 -- ok [213] $ mkdir d/d -- ok [214] $ ls -dl d/d | awk '{print $1}' -- ok [217] $ getfacl --omit-header d/d -- ok [232] $ setfacl -nm u:daemon:rx,d:u:daemon:rx,g:users:rx,g:daemon:rwx d/d -- ok [233] $ ls -dl d/d | awk '{print $1}' -- ok [236] $ getfacl --omit-header d/d -- ok [256] $ ln -s d d/l -- ok [257] $ ls -dl d/l | awk '{ sub(/\.$/, "", $1); print $1 }' -- ok [260] $ ls -dl -L d/l | awk '{print $1}' -- ok [265] $ cd d -- ok [266] $ getfacl --omit-header l -- ok [283] $ cd .. 
-- ok [285] $ rm d/l -- ok [289] $ setfacl -m g:daemon:rx,u:bin:rx d/d -- ok [290] $ ls -dl d/d | awk '{print $1}' -- ok [293] $ getfacl --omit-header d/d -- ok [310] $ setfacl -m d:u:bin:rwx d/d -- ok [311] $ ls -dl d/d | awk '{print $1}' -- ok [314] $ getfacl --omit-header d/d -- ok [331] $ rmdir d/d -- ok [335] $ setfacl -k d -- ok [336] $ ls -dl d | awk '{print $1}' -- ok [339] $ getfacl --omit-header d -- ok [350] $ setfacl -b d -- ok [351] $ ls -dl d | awk '{sub(/\./, "", $1); print $1}' -- ok [354] $ getfacl --omit-header d -- ok [362] $ chmod 775 d -- ok [363] $ ls -dl d | awk '{sub(/\./, "", $1); print $1}' -- ok [366] $ getfacl --omit-header d -- ok [372] $ rmdir d -- ok [373] $ umask 002 -- ok [374] $ mkdir d -- ok [375] $ setfacl -m u:daemon:rwx,u:bin:rx,d:u:daemon:rwx,d:u:bin:rx d -- ok [376] $ ls -dl d | awk '{print $1}' -- ok [379] $ getfacl --omit-header d -- ok [394] $ chmod 750 d -- ok [395] $ ls -dl d | awk '{print $1}' -- ok [398] $ getfacl --omit-header d -- ok [413] $ chmod 750 d -- ok [414] $ ls -dl d | awk '{print $1}' -- ok [417] $ getfacl --omit-header d -- ok [432] $ rmdir d -- ok 103 commands (103 passed, 0 failed) performing permissions with bin='bin' daemon='daemon' users='users'... [12] $ id -u -- ok [19] $ mkdir d -- ok [20] $ cd d -- ok [21] $ umask 027 -- ok [22] $ touch f -- ok [23] $ ls -l f | awk -- '{sub(/\./, "", $1); print $1, $3, $4 }' -- ok [30] $ echo root > f -- ok [32] $ su daemon -- ok [33] $ echo daemon >> f -- ok [36] $ su -- ok [42] $ chown bin:bin f -- ok [43] $ ls -l f | awk -- '{sub(/\./, "", $1); print $1, $3, $4 }' -- ok [45] $ su bin -- ok [46] $ echo bin >> f -- ok [52] $ su daemon -- ok [53] $ cat f -- ok [57] $ echo daemon >> f -- ok [64] $ su bin -- ok [65] $ setfacl -m u:daemon:rw f -- ok [66] $ getfacl --omit-header f -- ok [77] $ su daemon -- ok [78] $ echo daemon >> f -- ok [79] $ cat f -- ok [88] $ su bin -- ok [89] $ chmod g-w f -- ok [90] $ getfacl --omit-header f -- ok [98] $ su daemon -- ok [99] $ echo daemon >> f -- ok [108] $ su bin -- ok [109] $ setfacl -m u:daemon:r,g:daemon:rw-,o::rw- f -- ok [111] $ su daemon -- ok [112] $ echo daemon >> f -- ok [119] $ su bin -- ok [120] $ setfacl -x u:daemon f -- ok [122] $ su daemon -- ok [123] $ echo daemon2 >> f -- ok [124] $ cat f -- ok [134] $ su bin -- ok [135] $ setfacl -m g:daemon:r f -- ok [137] $ su daemon -- ok [138] $ echo daemon3 >> f -- ok [145] $ su bin -- ok [146] $ setfacl -x g:daemon f -- ok [148] $ su daemon -- ok [149] $ echo daemon4 >> f -- ok [156] $ su -- ok [157] $ chgrp root f -- ok [159] $ su daemon -- ok [160] $ echo daemon5 >> f -- ok [161] $ cat f -- ok [172] $ su -- ok [173] $ setfacl -m g:bin:r,g:daemon:w f -- ok [175] $ su daemon -- ok [176] $ : < f -- ok [177] $ : > f -- ok [178] $ : <> f -- ok [186] $ su -- ok [187] $ mkdir -m 750 e -- ok [188] $ touch e/h -- ok [190] $ su bin -- ok [191] $ shopt -s nullglob ; echo e/* -- ok [194] $ echo i > e/i -- ok [197] $ su -- ok [198] $ setfacl -m u:bin:rx e -- ok [200] $ su bin -- ok [201] $ echo e/* -- ok [208] $ touch e/i 2>&1 | sed -e "s/touch .*e\/i.*:/touch \'e\/i\':/" -- ok [211] $ su -- ok [212] $ setfacl -m u:bin:rwx e -- ok [214] $ su bin -- ok [215] $ echo i > e/i -- ok [220] $ su -- ok [221] $ touch g -- ok [222] $ ln -s g l -- ok [223] $ setfacl -m u:bin:rw l -- ok [224] $ ls -l g | awk -- '{ print $1, $3, $4 }' -- ok [234] $ mknod -m 0660 hdt b 91 64 -- ok [235] $ mknod -m 0660 null c 1 3 -- ok [236] $ mkfifo -m 0660 fifo -- ok [238] $ su bin -- ok [239] $ : < hdt -- ok [241] $ : < null -- ok 
[243] $ : < fifo -- ok [246] $ su -- ok [247] $ setfacl -m u:bin:rw hdt null fifo -- ok [249] $ su bin -- ok [250] $ : < hdt -- ok [252] $ : < null -- ok [253] $ ( echo blah > fifo & ) ; cat fifo -- ok [261] $ su -- ok [262] $ mkdir -m 600 x -- ok [263] $ chown daemon:daemon x -- ok [264] $ echo j > x/j -- ok [265] $ ls -l x/j | awk -- '{sub(/\./, "", $1); print $1, $3, $4 }' -- ok [268] $ setfacl -m u:daemon:r x -- ok [270] $ ls -l x/j | awk -- '{sub(/\./, "", $1); print $1, $3, $4 }' -- ok [274] $ echo k > x/k -- ok [277] $ chmod 750 x -- ok [282] $ su -- ok [283] $ cd .. -- ok [284] $ rm -rf d -- ok 101 commands (101 passed, 0 failed) 99 nobody:x:99: /usr/bin/setfattr performing permissions_xattr with bin='bin' daemon='daemon' users='users'... [11] $ id -u -- ok [19] $ mkdir d -- ok [20] $ cd d -- ok [21] $ umask 027 -- ok [22] $ touch f -- ok [23] $ chown nobody:nobody f -- ok [24] $ ls -l f | awk -- '{ sub(/\.$/, "", $1); print $1, $3, $4 }' -- ok [26] $ su nobody -- ok [27] $ echo nobody > f -- ok [33] $ su bin -- ok [34] $ setfattr -n user.test.xattr -v 123456 f -- ok [41] $ su nobody -- ok [42] $ setfacl -m g:bin:rw f -- ok [43] $ getfacl --omit-header f -- ok [55] $ su bin -- ok [56] $ setfattr -n user.test.xattr -v 123456 f -- ok [57] $ getfattr -d f -- ok [66] $ su -- ok [67] $ ln -s f l -- ok [68] $ ls -l l | awk -- '{ sub(/\.$/, "", $1); print $1, $3, $4 }' -- ok [70] $ su bin -- ok [71] $ getfattr -d l -- ok [81] $ su -- ok [82] $ mkdir t -- ok [83] $ chown nobody:nobody t -- ok [84] $ chmod 1750 t -- ok [85] $ ls -dl t | awk -- '{ sub(/\.$/, "", $1); print $1, $3, $4 }' -- ok [87] $ su nobody -- ok [88] $ setfacl -m g:bin:rwx t -- ok [89] $ getfacl --omit-header t -- ok [96] $ su bin -- ok [97] $ setfattr -n user.test.xattr -v 654321 t -- ok [105] $ su -- ok [106] $ mkdir d -- ok [107] $ chown nobody:nobody d -- ok [108] $ chmod 750 d -- ok [109] $ ls -dl d | awk -- '{ sub(/\.$/, "", $1); print $1, $3, $4 }' -- ok [111] $ su nobody -- ok [112] $ setfacl -m g:bin:rwx d -- ok [113] $ getfacl --omit-header d -- ok [120] $ su bin -- ok [121] $ setfattr -n user.test.xattr -v 654321 d -- ok [122] $ getfattr -d d -- ok [131] $ su -- ok [132] $ mknod -m 0660 hdt b 91 64 -- ok [133] $ mknod -m 0660 null c 1 3 -- ok [134] $ mkfifo -m 0660 fifo -- ok [135] $ setfattr -n user.test.xattr -v 123456 hdt -- ok [137] $ setfattr -n user.test.xattr -v 123456 null -- ok [139] $ setfattr -n user.test.xattr -v 123456 fifo -- ok [145] $ su -- ok [146] $ cd .. -- ok [147] $ rm -rf d -- ok 53 commands (53 passed, 0 failed) performing setfacl with bin='bin' daemon='daemon' users='users'... 
[3] $ mkdir d -- ok [4] $ chown bin:bin d -- ok [5] $ cd d -- ok [7] $ su bin -- ok [8] $ sg bin -- [(1,0)(1 1,1 1)]ok [9] $ umask 027 -- ok [10] $ touch g -- ok [11] $ ls -dl g | awk '{sub(/\./, "", $1); print $1}' -- ok [14] $ setfacl -m m:- g -- ok [15] $ ls -dl g | awk '{print $1}' -- ok [18] $ getfacl g -- ok [28] $ setfacl -x m g -- ok [29] $ getfacl g -- ok [38] $ setfacl -m u:daemon:rw g -- ok [39] $ getfacl g -- ok [50] $ setfacl -m u::rwx,g::r-x,o:- g -- ok [51] $ getfacl g -- ok [62] $ setfacl -m u::rwx,g::r-x,o:-,m:- g -- ok [63] $ getfacl g -- ok [74] $ setfacl -m u::rwx,g::r-x,o:-,u:root:-,m:- g -- ok [75] $ getfacl g -- ok [87] $ setfacl -m u::rwx,g::r-x,o:-,u:root:-,m:- g -- ok [88] $ getfacl g -- ok [100] $ setfacl -m u::rwx,g::r-x,o:-,u:root:- g -- ok [101] $ getfacl g -- ok [113] $ setfacl --test -x u: g -- ok [116] $ setfacl --test -x u:x -- ok [119] $ setfacl -m d:u:root:rwx g -- ok [122] $ setfacl -x m g -- ok [129] $ mkdir d -- ok [130] $ setfacl --test -m u::rwx,u:bin:rwx,g::r-x,o::--- d -- ok [133] $ setfacl --test -m u::rwx,u:bin:rwx,g::r-x,m::---,o::--- d -- ok [136] $ setfacl --test -d -m u::rwx,u:bin:rwx,g::r-x,o::--- d -- ok [139] $ setfacl --test -d -m u::rwx,u:bin:rwx,g::r-x,m::---,o::--- d -- ok [142] $ su -- ok [143] $ cd .. -- ok [144] $ rm -r d -- ok 37 commands (37 passed, 0 failed) performing inheritance with bin='bin' daemon='daemon' users='users'... [4] $ id -u -- ok [7] $ mkdir d -- ok [8] $ setfacl -d -m group:bin:r-x d -- ok [9] $ getfacl d -- ok [23] $ mkdir d/subdir -- ok [24] $ getfacl d/subdir -- ok [40] $ touch d/f -- ok [41] $ ls -l d/f | awk -- '{ print $1 }' -- ok [43] $ getfacl d/f -- ok [54] $ su bin -- ok [55] $ echo i >> d/f -- ok [62] $ su -- ok [63] $ rm d/f -- ok [64] $ rmdir d/subdir -- ok [65] $ mv d tree -- ok [66] $ ./make-tree -- ok [67] $ getfacl tree/dir0/dir5/file4 -- ok [77] $ getfacl tree/dir0/dir6/file4 -- ok [87] $ echo i >> tree/dir6/dir2/file2 -- ok [88] $ echo i > tree/dir1/f -- ok [89] $ ls -l tree/dir1/f | awk -- '{ print $1 }' -- ok [98] $ rm -rf tree -- ok 22 commands (22 passed, 0 failed) LU-974 ignore umask when acl is enabled... performing 974 with bin='bin' daemon='daemon' users='users'... [3] $ umask 022 -- ok [4] $ mkdir 974 -- ok [6] $ touch 974/f1 -- ok [7] $ ls -dl 974/f1 | awk '{sub(/\./, "", $1); print $1 }' -- ok [10] $ setfacl -R -d -m mask:007 974 -- ok [11] $ touch 974/f2 -- ok [12] $ ls -dl 974/f2 | awk '{ print $1 }' -- ok [15] $ umask 077 -- ok [16] $ touch f3 -- ok [17] $ ls -dl f3 | awk '{sub(/\./, "", $1); print $1 }' -- ok [20] $ rm -rf 974 -- ok 11 commands (11 passed, 0 failed) LU-2561 newly created file is same size as directory... performing 2561_zfs with bin='bin' daemon='daemon' users='users'... [3] $ mkdir -p 2561 -- ok [4] $ cd 2561 -- ok [5] $ getfacl --access . | setfacl -d -M- . -- ok [6] $ touch f1 -- ok [7] $ stat -c '%s' f1 -- ok [9] $ cd .. -- ok [10] $ rm -rf 2561 -- ok 7 commands (7 passed, 0 failed) performing 4924 with bin='bin' daemon='daemon' users='users'... [3] $ mkdir 4924 -- ok [4] $ cd 4924 -- ok [5] $ touch f -- ok [6] $ chmod u=rwx,g=rwxs f -- ok [7] $ ls -l f | awk -- '{sub(/\./, "", $1); print $1, $3, $4 }' -- ok [9] $ touch f -- ok [10] $ ls -l f | awk -- '{sub(/\./, "", $1); print $1, $3, $4 }' -- ok [12] $ cd .. 
-- ok [13] $ rm -rf 4924 -- ok 9 commands (9 passed, 0 failed) mdt.lustre-MDT0000.job_xattr=user.job PASS 103a (42s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 103b: umask lfs setstripe ================= 20:40:50 (1713487250) PASS 103b (35s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 103c: 'cp -rp' won't set empty acl ======== 20:41:27 (1713487287) getfattr: Removing leading '/' from absolute path names getfattr: Removing leading '/' from absolute path names PASS 103c (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 103e: inheritance of big amount of default ACLs ========================================================== 20:41:30 (1713487290) mdc.lustre-MDT0000-mdc-ffff8800aaaac800.stats=clear debug=0 7000 default ACLs created File: '/mnt/lustre/d103e.sanity' Size: 68096 Blocks: 133 IO Block: 1048576 directory Device: 2c54f966h/743766374d Inode: 144115205406722903 Links: 2 Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-18 20:41:30.000000000 -0400 Modify: 2024-04-18 20:41:30.000000000 -0400 Change: 2024-04-18 20:43:33.000000000 -0400 Birth: - File: '/mnt/lustre/d103e.sanity/f103e.sanity' Size: 0 Blocks: 1000 IO Block: 4194304 regular empty file Device: 2c54f966h/743766374d Inode: 144115205406722904 Links: 1 Access: (0664/-rw-rw-r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-18 20:43:34.000000000 -0400 Modify: 2024-04-18 20:43:34.000000000 -0400 Change: 2024-04-18 20:43:34.000000000 -0400 Birth: - 7000 ACLs were inherited setfacl: /mnt/lustre/d103e.sanity/f103e.sanity: Argument list too long Added 1187 more ACLs to the file Total 8188 ACLs in file mdc.lustre-MDT0000-mdc-ffff8800aaaac800.stats= snapshot_time 1713487519.967279035 secs.nsecs start_time 1713487290.807266546 secs.nsecs elapsed_time 229.160012489 secs.nsecs req_waittime 85226 samples [usecs] 524 122946 111880229 259721250781 req_active 93123 samples [reqs] 1 2 123432 184050 ldlm_ibits_enqueue 28606 samples [reqs] 1 1 28606 28606 mds_close 1 samples [usecs] 2263 2263 2263 5121169 mds_getxattr 20121 samples [usecs] 524 2818 18450417 17770334265 ldlm_cancel 28308 samples [usecs] 524 3249 24464102 21808388426 debug=trace inode super iotrace malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec lfsck hsm snapshot layout PASS 103e (229s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 103f: changelog doesn't interfere with default ACLs buffers ========================================================== 20:45:21 (1713487521) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl1' lustre-MDT0000: clear the changelog for cl1 of all records lustre-MDT0000: Deregistered changelog user #1 PASS 103f (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 104a: lfs df [-ih] [path] test =================================================================================== 20:45:26 (1713487526) UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 2210688 5120 2203520 1% /mnt/lustre[MDT:0] lustre-OST0000_UUID 3771392 5120 3764224 1% /mnt/lustre[OST:0] lustre-OST0001_UUID 3771392 3072 3766272 1% /mnt/lustre[OST:1] filesystem_summary: 7542784 8192 7530496 1% /mnt/lustre UUID Inodes IUsed IFree IUse% Mounted on 
lustre-MDT0000_UUID 461.9K 511 461.4K 1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID 115.9K 998 114.9K 1% /mnt/lustre[OST:0]
lustre-OST0001_UUID 116.0K 975 115.0K 1% /mnt/lustre[OST:1]
filesystem_summary: 230.4K 511 229.9K 1% /mnt/lustre
UUID bytes Used Available Use% Mounted on
lustre-MDT0000_UUID 2.1G 5.0M 2.1G 1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID 3.6G 5.0M 3.6G 1% /mnt/lustre[OST:0]
lustre-OST0001_UUID 3.6G 3.0M 3.6G 1% /mnt/lustre[OST:1]
filesystem_summary: 7.2G 8.0M 7.2G 1% /mnt/lustre
UUID Inodes IUsed IFree IUse% Mounted on
lustre-MDT0000_UUID 473017 511 472506 1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID 118694 998 117696 1% /mnt/lustre[OST:0]
lustre-OST0001_UUID 118735 975 117760 1% /mnt/lustre[OST:1]
filesystem_summary: 235967 511 235456 1% /mnt/lustre
UUID 1K-blocks Used Available Use% Mounted on
lustre-MDT0000_UUID 2210688 5120 2203520 1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID 3771392 5120 3764224 1% /mnt/lustre[OST:0]
lustre-OST0001_UUID 3771392 3072 3766272 1% /mnt/lustre[OST:1]
filesystem_summary: 7542784 8192 7530496 1% /mnt/lustre
UUID Inodes IUsed IFree IUse% Mounted on
lustre-MDT0000_UUID 461.9K 511 461.4K 1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID 115.9K 998 114.9K 1% /mnt/lustre[OST:0]
lustre-OST0001_UUID 116.0K 975 115.0K 1% /mnt/lustre[OST:1]
filesystem_summary: 230.4K 511 229.9K 1% /mnt/lustre
UUID 1K-blocks Used Available Use% Mounted on
lustre-MDT0000_UUID 2210688 5120 2203520 1% /mnt/lustre[MDT:0]
lustre-OST0001_UUID 3771392 3072 3766272 1% /mnt/lustre[OST:1]
filesystem_summary: 3771392 3072 3766272 1% /mnt/lustre
oleg216-client.virtnet: executing wait_import_state (FULL|IDLE) osc.lustre-OST0000-osc-ffff8800aaaac800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff8800aaaac800.ost_server_uuid in FULL state after 0 sec
UUID 1K-blocks Used Available Use% Mounted on
lustre-MDT0000_UUID 2210688 5120 2203520 1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID 3771392 5120 3764224 1% /mnt/lustre[OST:0]
lustre-OST0001_UUID 3771392 3072 3766272 1% /mnt/lustre[OST:1]
filesystem_summary: 7542784 8192 7530496 1% /mnt/lustre
PASS 104a (3s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 104b: runas -u 500 -g 500 lfs check servers test ============================================================================== 20:45:30 (1713487530)
PASS 104b (2s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 104c: Verify df vs lfs_df stays same after recordsize change ========================================================== 20:45:33 (1713487533)
Before recordsize change
lfs output : filesystem_summary: 7.2G 7.0M 7.2G 1% /mnt/lustre
df output : 192.168.202.116@tcp:/lustre 7.2G 7.0M 7.2G 1% /mnt/lustre
OST Blocksize: 1048576
MDT Blocksize: 131072
After recordsize change
lfs output : filesystem_summary: 7.2G 7.6M 7.2G 1% /mnt/lustre
df output : 192.168.202.116@tcp:/lustre 7.2G 7.0M 7.2G 1% /mnt/lustre
PASS 104c (7s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 104d: runas -u 500 -g 500 lctl dl test ==== 20:45:42 (1713487542)
running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lctl] [dl]
running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lctl] [dl]
running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lctl] [dl]
PASS 104d (1s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y ==
sanity test 105a: flock when mounted without -o flock test ================================================================== 20:45:45 (1713487545) PASS 105a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 105b: fcntl when mounted without -o flock test ================================================================== 20:45:48 (1713487548) PASS 105b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 105c: lockf when mounted without -o flock test ========================================================== 20:45:51 (1713487551) PASS 105c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 105d: flock race (should not freeze) ================================================================== 20:45:54 (1713487554) fail_loc=0x80000315 fcntl cmd 7 failed: Input/output error fcntl cmd 5 failed: Invalid argument thread 1: set write lock (blocking): rc = 0 thread 2: unlock: rc = 0 thread 2: unlock done: rc = 0 thread 2: set write lock (non-blocking): rc = 0 thread 2: set write lock done: rc = 0 thread 1: set write lock done: rc = 0 thread 1: unlock: rc = 0 thread 1: unlock done: rc = 0 PASS 105d (12s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 105e: Two conflicting flocks from same process ========================================================== 20:46:07 (1713487567) PASS 105e (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 105f: Enqueue same range flocks =========== 20:46:11 (1713487571) Time for processing 1.002s Time for processing 0.977s Time for processing 1.007s Time for processing 0.999s Time for processing 1.009s Time for processing 1.010s Time for processing 1.006s Time for processing 1.019s Time for processing 1.016s Time for processing 1.010s Time for processing 1.005s Time for processing 1.032s Time for processing 1.023s Time for processing 1.018s Time for processing 1.006s Time for processing 1.025s Time for processing 1.018s Time for processing 1.028s Time for processing 1.011s Time for processing 1.012s Time for processing 1.026s PASS 105f (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 106: attempt exec of dir followed by chown of that dir ========================================================== 20:46:15 (1713487575) /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13084: /mnt/lustre/d106.sanity: Is a directory PASS 106 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 107: Coredump on SIG ====================== 20:46:18 (1713487578) kernel.core_pattern = core kernel.core_uses_pid = 0 /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13110: 7269 Segmentation fault (core dumped) sleep 60 kernel.core_pattern = core kernel.core_uses_pid = 1 PASS 107 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 110: filename length checking ============= 20:46:22 (1713487582) lfs mkdir: dirstripe error on '/mnt/lustre/d110.sanity/bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb': File name too long lfs setdirstripe: cannot create dir 
'/mnt/lustre/d110.sanity/bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb': File name too long touch: cannot touch '/mnt/lustre/d110.sanity/yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy': File name too long total 2 drwxr-xr-x 2 root root 1024 Apr 18 20:46 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa -rw-r--r-- 1 root root 0 Apr 18 20:46 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx PASS 110 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 116a: stripe QOS: free space balance ============================================================================= 20:46:26 (1713487586) Free space priority 90% sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete OST kbytes available: 3765248 3766272 Min free space: OST 0: 3765248 Max free space: OST 1: 3766272 Check for uneven OSTs: diff=1024KB (0%) must be > 17% ...no Fill 19% remaining space in OST0 with 715397KB .............................................................................................................................................................................................................................................................................................................................................................. 
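The run of dots just above is test 116a's fill pass: it pins a file to one OST (OST 0 here) and appends until the per-OST free-space gap crosses the 17% QOS threshold, so the later write phase can show the allocator favoring the roomier OST. A minimal sketch of that technique, assuming a client mounted at /mnt/lustre (the directory and file names are illustrative, not the ones sanity.sh uses):

    # illustrative sketch of the fill pass, not the sanity.sh implementation
    DIR=/mnt/lustre
    mkdir -p $DIR/d116a.fill
    # pin a single-stripe file to OST index 0 so every block lands there
    lfs setstripe -i 0 -c 1 $DIR/d116a.fill/filler
    # append until the imbalance is large enough (this log fills 715397KB)
    dd if=/dev/zero of=$DIR/d116a.fill/filler bs=1M count=700 oflag=sync
    lfs df $DIR    # re-check per-OST free space before the QOS write phase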
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
OST kbytes available: 3046400 3766272
Min free space: OST 0: 3046400
Max free space: OST 1: 3766272
diff=719872=23% must be > 17% for QOS mode...ok
writing 600 files to QOS-assigned OSTs
....................wrote 600 200k files
Note: free space may not be updated, so measurements might be off
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
OST kbytes available: 2977792 3682304
Min free space: OST 0: 2977792
Max free space: OST 1: 3682304
free space delta: orig 719872 final 704512
Wrote 68608KB to smaller OST 0
Wrote 83968KB to larger OST 1
Wrote 22% more data to larger OST 1
lustre-OST0000_UUID 276 files created on smaller OST 0
lustre-OST0001_UUID 324 files created on larger OST 1
Wrote 17% more files to larger OST 1
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
cleanup time 24
PASS 116a (110s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 116b: QoS shouldn't LBUG if not enough OSTs found on the 2nd pass ========================================================== 20:48:17 (1713487697)
lod.lustre-MDT0000-mdtlov.qos_threshold_rr=0
lov.lustre-MDT0000-mdtlov.qos_threshold_rr=0
fail_loc=0x147
total: 20 open/close in 0.11 seconds: 175.55 ops/second
fail_loc=0
lod.lustre-MDT0000-mdtlov.qos_threshold_rr=17%
lov.lustre-MDT0000-mdtlov.qos_threshold_rr=17%
PASS 116b (3s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 117: verify osd extend ==================== 20:48:22 (1713487702)
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0412058 s, 25.4 MB/s
fail_loc=0x21e
fail_loc=0
Truncate succeeded.
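The fail_loc=0x21e / "Truncate succeeded." exchange above is the generic OBD fault-injection pattern, and the 118x tests that follow lean on it heavily: arm a fault point through lctl, drive the I/O that should hit it, then disarm. A hedged sketch of the shape of such a check (paths and sizes illustrative; only the fail_loc value is taken from this log):

    # illustrative fault-injection sketch, not the sanity.sh test body
    DIR=/mnt/lustre
    dd if=/dev/zero of=$DIR/f117.demo bs=1M count=1    # materialize one 1MB object
    lctl set_param fail_loc=0x21e                      # arm the osd-extend fault point
    truncate -s 512K $DIR/f117.demo &&
            echo "Truncate succeeded."                 # operation under test
    lctl set_param fail_loc=0                          # always disarm afterwards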
PASS 117 (2s) debug_raw_pointers=0 debug_raw_pointers=0 resend_count is set to 4 4 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118a: verify O_SYNC works ================= 20:48:25 (1713487705) 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.00649804 s, 20.2 MB/s PASS 118a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118b: Reclaim dirty pages on fatal error ==================================================================== 20:48:29 (1713487709) 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.00630039 s, 20.8 MB/s fail_val=0 fail_loc=0x217 write: No such file or directory fail_val=0 fail_loc=0 Dirty pages not leaked on ENOENT PASS 118b (2s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity test_118c skipping ALWAYS excluded test 118c resend_count is set to 4 4 SKIP: sanity test_118d skipping ALWAYS excluded test 118d debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118f: Simulate unrecoverable OSC side error ==================================================================== 20:48:34 (1713487714) 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.00678353 s, 19.3 MB/s fail_loc=0x8000040a write: Input/output error fail_loc=0x0 No pages locked after fsync 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.00705927 s, 18.6 MB/s PASS 118f (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118g: Don't stay in wait if we got local -ENOMEM ==================================================================== 20:48:37 (1713487717) 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.00630078 s, 20.8 MB/s fail_loc=0x406 write: Input/output error fail_loc=0 No pages locked after fsync 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.00606827 s, 21.6 MB/s PASS 118g (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118h: Verify timeout in handling recoverables errors ==================================================================== 20:48:40 (1713487720) 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.00716111 s, 18.3 MB/s fail_val=0 fail_loc=0x20e write: Input/output error fail_val=0 fail_loc=0 No pages locked after fsync PASS 118h (13s) debug_raw_pointers=0 debug_raw_pointers=0 resend_count is set to 4 4 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118i: Fix error before timeout in recoverable error ==================================================================== 20:48:54 (1713487734) 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.0228772 s, 5.7 MB/s fail_val=0 fail_loc=0x20e fail_val=0 fail_loc=0 No pages locked after fsync PASS 118i (8s) debug_raw_pointers=0 debug_raw_pointers=0 resend_count is set to 4 4 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118j: Simulate unrecoverable OST side error ==================================================================== 20:49:04 (1713487744) 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.00690086 s, 19.0 MB/s fail_val=0 fail_loc=0x220 write: Bad address fail_val=0 fail_loc=0x0 No pages locked after fsync PASS 118j (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118k: bio alloc -ENOMEM and IO TERM handling =================================================================== 20:49:08 (1713487748) fail_val=0 fail_loc=0x20e 
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 13665: 19157 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13665: 19161 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13665: 19164 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13665: 19167 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13665: 19171 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13665: 19174 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13665: 19178 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13665: 19181 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13665: 19184 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13665: 19188 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) fail_val=0 fail_loc=0 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 1.62047 s, 6.5 MB/s 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 3.88911 s, 2.7 MB/s 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 6.43168 s, 1.6 MB/s 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 4.47673 s, 2.3 MB/s 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 3.57764 s, 2.9 MB/s 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 2.64182 s, 4.0 MB/s 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 5.691 s, 1.8 MB/s 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 3.18104 s, 3.3 MB/s 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 6.21908 s, 1.7 MB/s 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 6.4885 s, 1.6 MB/s PASS 118k (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118l: fsync dir =========================== 20:49:18 (1713487758) PASS 118l (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118m: fdatasync dir ======================= 20:49:22 (1713487762) PASS 118m (1s) debug_raw_pointers=0 debug_raw_pointers=0 resend_count is set to 4 4 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118n: statfs() sends OST_STATFS requests in parallel ========================================================== 20:49:25 (1713487765) fail_val=0 fail_loc=0x242 fail_val=0 fail_loc=0 PASS 118n (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 119a: Short 
directIO read must return actual read amount ========================================================== 20:49:31 (1713487771) directio on /mnt/lustre/f119a.sanity for 1x524288 bytes PASS PASS 119a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 119b: Sparse directIO read must return actual read amount ========================================================== 20:49:34 (1713487774) 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0305229 s, 34.4 MB/s PASS 119b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 119c: Testing for direct read hitting hole ========================================================== 20:49:37 (1713487777) directio on /mnt/lustre/f119c.sanity for 1x1048576 bytes PASS directio on /mnt/lustre/f119c.sanity for 2x1048576 bytes PASS PASS 119c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 119e: Basic tests of dio read and write at various sizes ========================================================== 20:49:41 (1713487781) 1+0 records in 1+0 records out 26214400 bytes (26 MB) copied, 0.926656 s, 28.3 MB/s 4+0 records in 4+0 records out 16380 bytes (16 kB) copied, 0.222977 s, 73.5 kB/s llite.lustre-ffff8800aaaac800.unaligned_dio=0 testing disabling unaligned DIO - 'invalid argument' expected: dd: error reading '/mnt/lustre/f119e.sanity.1': Invalid argument 0+0 records in 0+0 records out 0 bytes (0 B) copied, 0.000979669 s, 0.0 kB/s llite.lustre-ffff8800aaaac800.unaligned_dio=1 Read/write with DIO at size 1044480 25+1 records in 25+1 records out 26214400 bytes (26 MB) copied, 1.61977 s, 16.2 MB/s -rw-r--r-- 1 root root 26214400 Apr 18 20:49 /mnt/lustre/f119e.sanity.1 -rw-r--r-- 1 root root 26214400 Apr 18 20:49 /mnt/lustre/f119e.sanity.2 /mnt/lustre/f119e.sanity.2 has type file OK /mnt/lustre/f119e.sanity.2 has size 26214400 OK Read/write with DIO at size 1048576 25+0 records in 25+0 records out 26214400 bytes (26 MB) copied, 1.45639 s, 18.0 MB/s -rw-r--r-- 1 root root 26214400 Apr 18 20:49 /mnt/lustre/f119e.sanity.1 -rw-r--r-- 1 root root 26214400 Apr 18 20:49 /mnt/lustre/f119e.sanity.2 /mnt/lustre/f119e.sanity.2 has type file OK /mnt/lustre/f119e.sanity.2 has size 26214400 OK Read/write with DIO at size 1049600 24+1 records in 24+1 records out 26214400 bytes (26 MB) copied, 1.51169 s, 17.3 MB/s -rw-r--r-- 1 root root 26214400 Apr 18 20:49 /mnt/lustre/f119e.sanity.1 -rw-r--r-- 1 root root 26214400 Apr 18 20:49 /mnt/lustre/f119e.sanity.2 /mnt/lustre/f119e.sanity.2 has type file OK /mnt/lustre/f119e.sanity.2 has size 26214400 OK llite.lustre-ffff8800aaaac800.unaligned_dio=1 PASS 119e (11s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 119f: dio vs dio race ===================== 20:49:53 (1713487793) 1+0 records in 1+0 records out 26214400 bytes (26 MB) copied, 0.905944 s, 28.9 MB/s bs: 1044480 12+1 records in 12+1 records out 26214400 bytes (26 MB) copied, 1.26235 s, 20.8 MB/s 25+1 records in 25+1 records out 26214400 bytes (26 MB) copied, 1.85738 s, 14.1 MB/s /mnt/lustre/f119f.sanity.2 has type file OK /mnt/lustre/f119f.sanity.2 has size 26214400 OK bs: 1048576 12+1 records in 12+1 records out 26214400 bytes (26 MB) copied, 1.11167 s, 23.6 MB/s 25+0 records in 25+0 records out 26214400 bytes (26 MB) copied, 1.58227 s, 16.6 MB/s /mnt/lustre/f119f.sanity.2 has type file OK /mnt/lustre/f119f.sanity.2 has size 26214400 OK bs: 1049600 12+1 
records in 12+1 records out 26214400 bytes (26 MB) copied, 1.20585 s, 21.7 MB/s 24+1 records in 24+1 records out 26214400 bytes (26 MB) copied, 1.73677 s, 15.1 MB/s /mnt/lustre/f119f.sanity.2 has type file OK /mnt/lustre/f119f.sanity.2 has size 26214400 OK PASS 119f (12s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 119g: dio vs buffered I/O race ============ 20:50:06 (1713487806) 1+0 records in 1+0 records out 26214400 bytes (26 MB) copied, 0.936236 s, 28.0 MB/s bs: 1044480 12+1 records in 12+1 records out 26214400 bytes (26 MB) copied, 1.64397 s, 15.9 MB/s 25+1 records in 25+1 records out 26214400 bytes (26 MB) copied, 2.50939 s, 10.4 MB/s /mnt/lustre/f119g.sanity.2 has type file OK /mnt/lustre/f119g.sanity.2 has size 26214400 OK bs: 1048576 12+1 records in 12+1 records out 26214400 bytes (26 MB) copied, 0.767337 s, 34.2 MB/s 25+0 records in 25+0 records out 26214400 bytes (26 MB) copied, 1.69282 s, 15.5 MB/s /mnt/lustre/f119g.sanity.2 has type file OK /mnt/lustre/f119g.sanity.2 has size 26214400 OK bs: 1049600 12+1 records in 12+1 records out 26214400 bytes (26 MB) copied, 0.87141 s, 30.1 MB/s 24+1 records in 24+1 records out 26214400 bytes (26 MB) copied, 1.86849 s, 14.0 MB/s /mnt/lustre/f119g.sanity.2 has type file OK /mnt/lustre/f119g.sanity.2 has size 26214400 OK PASS 119g (12s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 119h: basic tests of memory unaligned dio ========================================================== 20:50:20 (1713487820) unaligned writes of blocksize: 1044480 unaligned writes of blocksize: 1048576 unaligned writes of blocksize: 1049600 5+0 records in 5+0 records out 26214400 bytes (26 MB) copied, 0.691727 s, 37.9 MB/s unaligned reads of blocksize: 1044480 unaligned reads of blocksize: 1048576 unaligned reads of blocksize: 1049600 PASS 119h (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 119i: test unaligned aio at varying sizes ========================================================== 20:50:32 (1713487832) /home/green/git/lustre-release/lustre/tests/aiocp 1+0 records in 1+0 records out 26214400 bytes (26 MB) copied, 0.958538 s, 27.3 MB/s bs: 1044480, align: 8, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK bs: 1048576, align: 8, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK bs: 1049600, align: 8, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK bs: 1044480, align: 512, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK bs: 1048576, align: 512, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK bs: 1049600, align: 512, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK bs: 1044480, align: 4096, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK bs: 1048576, align: 4096, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK bs: 1049600, align: 4096, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK PASS 119i (17s) debug_raw_pointers=0 debug_raw_pointers=0 
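Tests 119e-119i above all sweep the same three transfer sizes, 1044480, 1048576 and 1049600 bytes (1MiB minus/plus 1KiB around the aligned case), and 119i additionally varies the userspace buffer alignment across 8, 512 and 4096 bytes; each pass is judged by the copy arriving intact. A sketch of the core round trip under those assumptions (file names illustrative, not the sanity.sh implementation):

    # illustrative sketch of the 119-series DIO round trip
    DIR=/mnt/lustre
    dd if=/dev/urandom of=$DIR/f119.src bs=1M count=25        # reference data
    for bs in 1044480 1048576 1049600; do
            # O_DIRECT on both ends; the odd sizes exercise the unaligned-DIO path
            dd if=$DIR/f119.src of=$DIR/f119.dst bs=$bs iflag=direct oflag=direct
            cmp $DIR/f119.src $DIR/f119.dst || echo "data mismatch at bs=$bs"
    done
    # with llite.*.unaligned_dio=0 the odd sizes fail with EINVAL, as logged in 119e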
debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 120a: Early Lock Cancel: mkdir test ======= 20:50:51 (1713487851) ldlm.namespaces.lustre-MDT0000-mdc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-OST0000-osc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-OST0001-osc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-MDT0000-mdc-ffff8800aaaac800.lru_size=0 ldlm.namespaces.lustre-OST0000-osc-ffff8800aaaac800.lru_size=0 ldlm.namespaces.lustre-OST0001-osc-ffff8800aaaac800.lru_size=0 PASS 120a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 120b: Early Lock Cancel: create test ====== 20:50:54 (1713487854) ldlm.namespaces.lustre-MDT0000-mdc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-OST0000-osc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-OST0001-osc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-MDT0000-mdc-ffff8800aaaac800.lru_size=0 ldlm.namespaces.lustre-OST0000-osc-ffff8800aaaac800.lru_size=0 ldlm.namespaces.lustre-OST0001-osc-ffff8800aaaac800.lru_size=0 PASS 120b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 120c: Early Lock Cancel: link test ======== 20:50:58 (1713487858) ldlm.namespaces.lustre-MDT0000-mdc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-OST0000-osc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-OST0001-osc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-MDT0000-mdc-ffff8800aaaac800.lru_size=0 ldlm.namespaces.lustre-OST0000-osc-ffff8800aaaac800.lru_size=0 ldlm.namespaces.lustre-OST0001-osc-ffff8800aaaac800.lru_size=0 PASS 120c (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 120d: Early Lock Cancel: setattr test ===== 20:51:02 (1713487862) ldlm.namespaces.lustre-MDT0000-mdc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-OST0000-osc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-OST0001-osc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-MDT0000-mdc-ffff8800aaaac800.lru_size=0 ldlm.namespaces.lustre-OST0000-osc-ffff8800aaaac800.lru_size=0 ldlm.namespaces.lustre-OST0001-osc-ffff8800aaaac800.lru_size=0 PASS 120d (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 120e: Early Lock Cancel: unlink test ====== 20:51:06 (1713487866) ldlm.namespaces.lustre-MDT0000-mdc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-OST0000-osc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-OST0001-osc-ffff8800aaaac800.lru_size=400 1+0 records in 1+0 records out 512 bytes (512 B) copied, 0.00476423 s, 107 kB/s 1+0 records in 1+0 records out 512 bytes (512 B) copied, 0.00738479 s, 69.3 kB/s ldlm.namespaces.lustre-MDT0000-mdc-ffff8800aaaac800.lru_size=0 ldlm.namespaces.lustre-OST0000-osc-ffff8800aaaac800.lru_size=0 ldlm.namespaces.lustre-OST0001-osc-ffff8800aaaac800.lru_size=0 PASS 120e (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 120f: Early Lock Cancel: rename test ====== 20:51:17 (1713487877) ldlm.namespaces.lustre-MDT0000-mdc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-OST0000-osc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-OST0001-osc-ffff8800aaaac800.lru_size=400 1+0 records in 1+0 records out 512 bytes (512 B) copied, 0.00602734 s, 84.9 kB/s 1+0 records in 1+0 records out 512 bytes (512 B) copied, 0.00629906 s, 81.3 kB/s 1+0 records in 1+0 records out 512 bytes (512 B) copied, 0.00780384 s, 65.6 kB/s 1+0 records 
in 1+0 records out 512 bytes (512 B) copied, 0.00833441 s, 61.4 kB/s ldlm.namespaces.lustre-MDT0000-mdc-ffff8800aaaac800.lru_size=0 ldlm.namespaces.lustre-OST0000-osc-ffff8800aaaac800.lru_size=0 ldlm.namespaces.lustre-OST0001-osc-ffff8800aaaac800.lru_size=0 PASS 120f (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 120g: Early Lock Cancel: performance test ========================================================== 20:51:28 (1713487888) ldlm.namespaces.lustre-MDT0000-mdc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-OST0000-osc-ffff8800aaaac800.lru_size=400 ldlm.namespaces.lustre-OST0001-osc-ffff8800aaaac800.lru_size=400 create 10000 files - open/close 4963 (time 1713487900.11 total 10.00 last 496.21) total: 10000 open/close in 19.95 seconds: 501.22 ops/second total: 0 cancels, 0 blockings rm 10000 files total: 10000 removes in 59 total: 0 cancels, 0 blockings ldlm.namespaces.lustre-MDT0000-mdc-ffff8800aaaac800.lru_size=0 ldlm.namespaces.lustre-OST0000-osc-ffff8800aaaac800.lru_size=0 ldlm.namespaces.lustre-OST0001-osc-ffff8800aaaac800.lru_size=0 PASS 120g (84s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 121: read cancel race ===================== 20:52:54 (1713487974) fail_loc=0x310 fail_loc=0 PASS 121 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 123aa: verify statahead work ============== 20:52:57 (1713487977) seq.cli-lustre-OST0000-super.width=0x1ffffff seq.cli-lustre-OST0001-super.width=0x1ffffff kvm mdc.lustre-MDT0000-mdc-ffff8800aaaac800.batch_stats=0 total: 100 open/close in 0.51 seconds: 196.68 ops/second llite.lustre-ffff8800aaaac800.statahead_max=0 101 real 0m0.528s user 0m0.004s sys 0m0.203s ls -l 100 files without statahead: 1 sec llite.lustre-ffff8800aaaac800.statahead_max=128 128 101 real 0m0.275s user 0m0.001s sys 0m0.121s ls -l 100 files with statahead: 0 sec statahead total: 26 statahead wrong: 0 agl total: 26 list_total: 26 fname_total: 0 hit_total: 558 miss_total: 136 total: 900 open/close in 1.86 seconds: 482.62 ops/second llite.lustre-ffff8800aaaac800.statahead_max=0 1001 real 0m5.235s user 0m0.027s sys 0m2.025s ls -l 1000 files without statahead: 5 sec llite.lustre-ffff8800aaaac800.statahead_max=128 128 1001 real 0m1.097s user 0m0.015s sys 0m0.853s ls -l 1000 files with statahead: 1 sec statahead total: 27 statahead wrong: 0 agl total: 27 list_total: 27 fname_total: 0 hit_total: 1557 miss_total: 137 - open/close 5251 (time 1713488002.28 total 10.00 last 525.06) total: 9000 open/close in 17.28 seconds: 520.79 ops/second llite.lustre-ffff8800aaaac800.statahead_max=0 10001 real 0m48.611s user 0m0.276s sys 0m17.856s ls -l 10000 files without statahead: 49 sec llite.lustre-ffff8800aaaac800.statahead_max=128 128 10001 real 0m10.587s user 0m0.205s sys 0m8.444s ls -l 10000 files with statahead: 10 sec statahead total: 28 statahead wrong: 0 agl total: 28 list_total: 28 fname_total: 0 hit_total: 11556 miss_total: 138 ls -l done rm -r /mnt/lustre/d123aa.sanity/: 36 seconds rm done statahead total: 28 statahead wrong: 0 agl total: 28 list_total: 28 fname_total: 0 hit_total: 11556 miss_total: 138 mdc.lustre-MDT0000-mdc-ffff8800aaaac800.batch_stats= snapshot_time: 1713488110.787627774 secs.nsecs start_time: 1713487978.902944578 secs.nsecs elapsed_time: 131.884683196 secs.nsecs subreqs per batch batches % cum % 1: 142 32 32 2: 31 7 39 4: 55 12 51 8: 40 9 61 16: 6 1 62 32: 2 0 62 64: 163 37 100 
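The batch_stats table above is a power-of-two histogram of how many statahead subrequests each batched MDC RPC carried: each "N:" row counts batches whose subrequest count fell between the previous bucket and N, followed by that bucket's share and the cumulative share. The last row, "64: 163 37 100", therefore says 163 batches (37% of the total, closing out 100%) packed up to 64 sub-stats each. A sketch of how such a run is driven (createmany is the lustre/tests helper behind the "open/close" lines above; the directory name is illustrative):

    # illustrative measurement loop for statahead batching
    lctl set_param llite.*.statahead_max=128       # value used in the run above
    lctl set_param mdc.*.batch_stats=0             # zero the histogram
    mkdir /mnt/lustre/d123.demo
    createmany -o /mnt/lustre/d123.demo/f 10000    # lay down 10000 files
    time ls -l /mnt/lustre/d123.demo > /dev/null   # drives statahead + batching
    lctl get_param llite.*.statahead_stats mdc.*.batch_stats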
seq.cli-lustre-OST0000-super.width=65536 seq.cli-lustre-OST0001-super.width=65536 PASS 123aa (135s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 123ab: verify statahead work by using statx ========================================================== 20:55:14 (1713488114) SKIP: sanity test_123ab Test must be statx() syscall supported SKIP 123ab (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 123ac: verify statahead work by using statx without glimpse RPCs ========================================================== 20:55:16 (1713488116) SKIP: sanity test_123ac Test must be statx() syscall supported SKIP 123ac (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 123ad: Verify batching statahead works correctly ========================================================== 20:55:18 (1713488118) batching: statahead_max=32 statahead_batch_max=32 mdc.lustre-MDT0000-mdc-ffff8800aaaac800.batch_stats=0 llite.lustre-ffff8800aaaac800.statahead_max=32 llite.lustre-ffff8800aaaac800.statahead_batch_max=32 seq.cli-lustre-OST0000-super.width=0x1ffffff seq.cli-lustre-OST0001-super.width=0x1ffffff kvm mdc.lustre-MDT0000-mdc-ffff8800aaaac800.batch_stats=0 total: 100 open/close in 0.49 seconds: 202.70 ops/second llite.lustre-ffff8800aaaac800.statahead_max=0 101 real 0m0.468s user 0m0.002s sys 0m0.180s ls -l 100 files without statahead: 0 sec llite.lustre-ffff8800aaaac800.statahead_max=32 32 101 real 0m0.327s user 0m0.004s sys 0m0.115s ls -l 100 files with statahead: 0 sec statahead total: 29 statahead wrong: 0 agl total: 29 list_total: 29 fname_total: 0 hit_total: 11655 miss_total: 139 total: 900 open/close in 1.65 seconds: 545.59 ops/second llite.lustre-ffff8800aaaac800.statahead_max=0 1001 real 0m4.899s user 0m0.019s sys 0m1.864s ls -l 1000 files without statahead: 5 sec llite.lustre-ffff8800aaaac800.statahead_max=32 32 1001 real 0m1.939s user 0m0.017s sys 0m0.918s ls -l 1000 files with statahead: 2 sec statahead total: 30 statahead wrong: 0 agl total: 30 list_total: 30 fname_total: 0 hit_total: 12654 miss_total: 140 - open/close 5489 (time 1713488143.27 total 10.00 last 548.90) total: 9000 open/close in 16.93 seconds: 531.59 ops/second llite.lustre-ffff8800aaaac800.statahead_max=0 10001 real 0m50.265s user 0m0.268s sys 0m18.298s ls -l 10000 files without statahead: 50 sec llite.lustre-ffff8800aaaac800.statahead_max=32 32 10001 real 0m19.910s user 0m0.236s sys 0m8.711s ls -l 10000 files with statahead: 19 sec statahead total: 31 statahead wrong: 0 agl total: 31 list_total: 31 fname_total: 0 hit_total: 22653 miss_total: 141 ls -l done rm -r /mnt/lustre/d123ad.sanity/: 37 seconds rm done statahead total: 31 statahead wrong: 0 agl total: 31 list_total: 31 fname_total: 0 hit_total: 22653 miss_total: 141 mdc.lustre-MDT0000-mdc-ffff8800aaaac800.batch_stats= snapshot_time: 1713488263.741255496 secs.nsecs start_time: 1713488119.801354192 secs.nsecs elapsed_time: 143.939901304 secs.nsecs subreqs per batch batches % cum % 1: 1 0 0 2: 0 0 0 4: 0 0 0 8: 4 1 1 16: 4 1 2 32: 346 97 100 - open/close 5307 (time 1713488274.34 total 10.00 last 530.66) total: 10000 open/close in 19.11 seconds: 523.41 ops/second llite.lustre-ffff8800aaaac800.statahead_batch_max=0 llite.lustre-ffff8800aaaac800.statahead_stats=clear mdc.lustre-MDT0000-mdc-ffff8800aaaac800.stats=clear 10001 real 0m10.174s user 0m0.225s sys 0m9.277s llite.lustre-ffff8800aaaac800.statahead_batch_max=32 
llite.lustre-ffff8800aaaac800.statahead_stats=clear mdc.lustre-MDT0000-mdc-ffff8800aaaac800.batch_stats=clear mdc.lustre-MDT0000-mdc-ffff8800aaaac800.stats=clear 10001 real 0m20.574s user 0m0.220s sys 0m8.902s unbatched RPCs: 10004, batched RPCs: 315 mdc.lustre-MDT0000-mdc-ffff8800aaaac800.batch_stats= snapshot_time: 1713488318.561503758 secs.nsecs start_time: 1713488295.547775998 secs.nsecs elapsed_time: 23.013727760 secs.nsecs subreqs per batch batches % cum % 1: 0 0 0 2: 0 0 0 4: 0 0 0 8: 1 0 0 16: 2 0 0 32: 312 99 100 batching: statahead_max=2048 statahead_batch_max=256 mdc.lustre-MDT0000-mdc-ffff8800aaaac800.batch_stats=0 llite.lustre-ffff8800aaaac800.statahead_max=2048 llite.lustre-ffff8800aaaac800.statahead_batch_max=256 seq.cli-lustre-OST0000-super.width=0x1ffffff seq.cli-lustre-OST0001-super.width=0x1ffffff kvm mdc.lustre-MDT0000-mdc-ffff8800aaaac800.batch_stats=0 total: 100 open/close in 0.53 seconds: 187.15 ops/second llite.lustre-ffff8800aaaac800.statahead_max=0 101 real 0m0.518s user 0m0.004s sys 0m0.199s ls -l 100 files without statahead: 1 sec llite.lustre-ffff8800aaaac800.statahead_max=2048 2048 101 real 0m0.269s user 0m0.004s sys 0m0.109s ls -l 100 files with statahead: 1 sec statahead total: 2 statahead wrong: 0 agl total: 2 list_total: 2 fname_total: 0 hit_total: 10098 miss_total: 2 total: 900 open/close in 1.71 seconds: 525.80 ops/second llite.lustre-ffff8800aaaac800.statahead_max=0 1001 real 0m4.982s user 0m0.024s sys 0m1.862s ls -l 1000 files without statahead: 5 sec llite.lustre-ffff8800aaaac800.statahead_max=2048 2048 1001 real 0m1.107s user 0m0.024s sys 0m0.783s ls -l 1000 files with statahead: 1 sec statahead total: 3 statahead wrong: 0 agl total: 3 list_total: 3 fname_total: 0 hit_total: 11097 miss_total: 3 - open/close 4837 (time 1713488380.20 total 10.00 last 483.66) total: 9000 open/close in 17.88 seconds: 503.37 ops/second llite.lustre-ffff8800aaaac800.statahead_max=0 10001 real 1m18.597s user 0m0.503s sys 0m23.670s ls -l 10000 files without statahead: 78 sec llite.lustre-ffff8800aaaac800.statahead_max=2048 2048 10001 real 0m14.089s user 0m0.309s sys 0m9.711s ls -l 10000 files with statahead: 14 sec statahead total: 4 statahead wrong: 0 agl total: 4 list_total: 4 fname_total: 0 hit_total: 21096 miss_total: 4 ls -l done rm -r /mnt/lustre/d123ad.sanity/: 55 seconds rm done statahead total: 4 statahead wrong: 0 agl total: 4 list_total: 4 fname_total: 0 hit_total: 21096 miss_total: 4 mdc.lustre-MDT0000-mdc-ffff8800aaaac800.batch_stats= snapshot_time: 1713488543.939419500 secs.nsecs start_time: 1713488319.269671603 secs.nsecs elapsed_time: 224.669747897 secs.nsecs subreqs per batch batches % cum % 1: 162 55 55 2: 18 6 61 4: 15 5 66 8: 16 5 72 16: 22 7 79 32: 14 4 84 64: 6 2 86 128: 0 0 86 256: 39 13 100 - open/close 5581 (time 1713488554.57 total 10.00 last 558.03) total: 10000 open/close in 18.66 seconds: 535.88 ops/second llite.lustre-ffff8800aaaac800.statahead_batch_max=0 llite.lustre-ffff8800aaaac800.statahead_stats=clear mdc.lustre-MDT0000-mdc-ffff8800aaaac800.stats=clear 10001 real 0m10.216s user 0m0.198s sys 0m8.829s llite.lustre-ffff8800aaaac800.statahead_batch_max=256 llite.lustre-ffff8800aaaac800.statahead_stats=clear mdc.lustre-MDT0000-mdc-ffff8800aaaac800.batch_stats=clear mdc.lustre-MDT0000-mdc-ffff8800aaaac800.stats=clear 10001 real 0m9.849s user 0m0.200s sys 0m8.320s unbatched RPCs: 10004, batched RPCs: 364 mdc.lustre-MDT0000-mdc-ffff8800aaaac800.batch_stats= snapshot_time: 1713488587.474377996 secs.nsecs start_time: 1713488575.294843838 secs.nsecs 
elapsed_time: 12.179534158 secs.nsecs subreqs per batch batches % cum % 1: 253 69 69 2: 14 3 73 4: 13 3 76 8: 14 3 80 16: 20 5 86 32: 13 3 89 64: 2 0 90 128: 0 0 90 256: 35 9 100 seq.cli-lustre-OST0000-super.width=33554431 seq.cli-lustre-OST0001-super.width=33554431 seq.cli-lustre-OST0000-super.width=65536 seq.cli-lustre-OST0001-super.width=65536 llite.lustre-ffff8800aaaac800.statahead_batch_max=64 llite.lustre-ffff8800aaaac800.statahead_max=128 PASS 123ad (509s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 123b: not panic with network error in statahead enqueue (bug 15027) ========================================================== 21:03:49 (1713488629) total: 1000 open/close in 1.86 seconds: 536.22 ops/second fail_loc=0x80000803 ls done fail_loc=0x0 statahead total: 2 statahead wrong: 0 agl total: 2 list_total: 2 fname_total: 0 hit_total: 10998 miss_total: 2 PASS 123b (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 123c: Can not initialize inode warning on DNE statahead ========================================================== 21:04:00 (1713488640) SKIP: sanity test_123c needs >= 2 MDTs SKIP 123c (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 123d: Statahead on striped directories works correctly ========================================================== 21:04:03 (1713488643) total: 100 mkdir in 0.26 seconds: 389.22 ops/second Stopping client oleg216-client.virtnet /mnt/lustre (opts:) Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre llite.lustre-ffff8800b3e45000.statahead_max=128 llite.lustre-ffff8800b3e45000.statahead_stats=0 total 50 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity0 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity1 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity10 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity11 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity12 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity13 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity14 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity15 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity16 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity17 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity18 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity19 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity2 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity20 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity21 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity22 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity23 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity24 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity25 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity26 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity27 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity28 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity29 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity3 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity30 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity31 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity32 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity33 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity34 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity35 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity36 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity37 drwxr-xr-x 2 
root root 512 Apr 18 21:04 f123d.sanity38 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity39 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity4 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity40 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity41 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity42 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity43 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity44 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity45 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity46 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity47 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity48 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity49 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity5 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity50 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity51 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity52 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity53 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity54 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity55 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity56 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity57 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity58 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity59 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity6 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity60 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity61 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity62 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity63 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity64 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity65 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity66 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity67 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity68 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity69 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity7 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity70 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity71 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity72 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity73 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity74 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity75 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity76 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity77 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity78 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity79 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity8 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity80 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity81 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity82 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity83 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity84 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity85 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity86 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity87 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity88 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity89 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity9 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity90 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity91 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity92 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity93 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity94 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity95 drwxr-xr-x 2 root root 512 Apr 18 21:04 
f123d.sanity96 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity97 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity98 drwxr-xr-x 2 root root 512 Apr 18 21:04 f123d.sanity99 statahead total: 1 statahead wrong: 0 agl total: 1 list_total: 1 fname_total: 0 hit_total: 99 miss_total: 1 PASS 123d (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 123e: statahead with large wide striping == 21:04:08 (1713488648) llite.lustre-ffff8800b3e45000.statahead_max=2048 llite.lustre-ffff8800b3e45000.statahead_batch_max=1024 total 16016 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.0 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.1 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.10 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.100 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.1000 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.101 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.102 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.103 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.104 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.105 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.106 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.107 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.108 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.109 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.11 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.110 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.111 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.112 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.113 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.114 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.115 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.116 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.117 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.118 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.119 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.12 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.120 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.121 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.122 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.123 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.124 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.125 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.126 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.127 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.128 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.129 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.13 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.130 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.131 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.132 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.133 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.134 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.135 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.136 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.137 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.138 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.139 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.14 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.140 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.141 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.142 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.143 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.144 -rw-r--r-- 1 root root 0 Apr 18 21:04 f123e.sanity.145 
mdc.lustre-MDT0000-mdc-ffff8800b3e45000.batch_stats=
snapshot_time: 1713488685.712005023 secs.nsecs
start_time: 0.000000000 secs.nsecs
elapsed_time: 1713488685.712005023 secs.nsecs
subreqs per batch   batches    %   cum %
   1:                     0    0      0
   2:                     0    0      0
   4:                     0    0      0
   8:                     2   20     20
  16:                     2   20     40
  32:                     3   30     70
  64:                     2   20     90
 128:                     0    0     90
 256:                     0    0     90
 512:                     0    0     90
1024:                     1   10    100
llite.lustre-ffff8800b3e45000.statahead_agl=1
llite.lustre-ffff8800b3e45000.statahead_batch_max=1024
llite.lustre-ffff8800b3e45000.statahead_max=2048
llite.lustre-ffff8800b3e45000.statahead_min=8
llite.lustre-ffff8800b3e45000.statahead_running_max=16
llite.lustre-ffff8800b3e45000.statahead_timeout=30
llite.lustre-ffff8800b3e45000.statahead_stats=
statahead total: 2
statahead wrong: 0
agl total: 2
list_total: 2
fname_total: 0
hit_total: 166
miss_total: 1868
llite.lustre-ffff8800b3e45000.statahead_batch_max=64
llite.lustre-ffff8800b3e45000.statahead_max=128
PASS 123e (53s)
debug_raw_pointers=0
debug_raw_pointers=0
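The batch_stats histogram above counts, per MDC device, how many batched statahead RPCs carried a given number of sub-requests; with statahead_batch_max=1024, most sub-requests in this run land in a single large batch. A rough sketch for resetting and reading the histogram between runs (assuming the bucket lines keep the "N: batches % cum%" shape shown above):

  # Clear the per-MDC batching histogram, then read it back after a scan.
  lctl set_param mdc.*.batch_stats=clear
  lctl get_param mdc.*.batch_stats
  # Weighted total of sub-requests: sum of bucket-size * batch-count.
  lctl get_param -n mdc.*.batch_stats | awk '/^ *[0-9]+:/ { t += $1 * $2 } END { print t }'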
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 123f: Retry mechanism with large wide striping files ========================================================== 21:05:04 (1713488704)
llite.lustre-ffff8800b3e45000.statahead_max=64
llite.lustre-ffff8800b3e45000.statahead_batch_max=64
total 100393
[ls -l listing elided: f123f.sanity.0 through f123f.sanity.200, 201 zero-length files, all "-rw-r--r-- 1 root root 0", dated Apr 18 21:05-21:06]
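Test 123f repeats the wide-striping scan with the limits deliberately lowered to 64/64; the statahead_stats block below records "statahead wrong: 1", consistent with the retry path this test exercises. A sketch of the post-scan check (path hypothetical):

  # After the scan, a non-zero "statahead wrong" marks a statahead pass
  # that was abandoned and retried.
  ls -l /mnt/lustre/d123f.sanity > /dev/null   # hypothetical scan
  lctl get_param -n llite.*.statahead_stats | grep 'statahead wrong'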
21:05 f123f.sanity.94 -rw-r--r-- 1 root root 0 Apr 18 21:05 f123f.sanity.95 -rw-r--r-- 1 root root 0 Apr 18 21:05 f123f.sanity.96 -rw-r--r-- 1 root root 0 Apr 18 21:05 f123f.sanity.97 -rw-r--r-- 1 root root 0 Apr 18 21:05 f123f.sanity.98 -rw-r--r-- 1 root root 0 Apr 18 21:05 f123f.sanity.99 mdc.lustre-MDT0000-mdc-ffff8800b3e45000.batch_stats= snapshot_time: 1713488852.830856801 secs.nsecs start_time: 0.000000000 secs.nsecs elapsed_time: 1713488852.830856801 secs.nsecs subreqs per batch batches % cum % 1: 11 44 44 2: 0 0 44 4: 0 0 44 8: 4 16 60 16: 3 12 72 32: 3 12 84 64: 3 12 96 128: 0 0 96 256: 0 0 96 512: 0 0 96 1024: 1 4 100 llite.lustre-ffff8800b3e45000.statahead_agl=1 llite.lustre-ffff8800b3e45000.statahead_batch_max=64 llite.lustre-ffff8800b3e45000.statahead_max=64 llite.lustre-ffff8800b3e45000.statahead_min=8 llite.lustre-ffff8800b3e45000.statahead_running_max=16 llite.lustre-ffff8800b3e45000.statahead_timeout=30 llite.lustre-ffff8800b3e45000.statahead_stats= statahead total: 3 statahead wrong: 1 agl total: 3 list_total: 3 fname_total: 0 hit_total: 183 miss_total: 1874 llite.lustre-ffff8800b3e45000.statahead_max=128 llite.lustre-ffff8800b3e45000.statahead_batch_max=64 PASS 123f (203s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 123g: Test for stat-ahead advise ========== 21:08:29 (1713488909) total: 1000 open/close in 2.25 seconds: 445.16 ops/second llite.lustre-ffff8800b3e45000.statahead_stats=clear mdc.lustre-MDT0000-mdc-ffff8800b3e45000.batch_stats=clear statahead total: 1 statahead wrong: 0 agl total: 1 list_total: 0 fname_total: 0 hit_total: 1000 miss_total: 0 snapshot_time: 1713488922.179353502 secs.nsecs start_time: 1713488921.025156758 secs.nsecs elapsed_time: 1.154196744 secs.nsecs subreqs per batch batches % cum % 1: 2 8 8 2: 0 0 8 4: 2 8 17 8: 3 13 30 16: 1 4 34 32: 0 0 34 64: 15 65 100 Hit total: 1000 PASS 123g (14s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 123h: Verify statahead work with the fname pattern via du ========================================================== 21:08:45 (1713488925) llite.lustre-ffff8800b3e45000.enable_statahead_fname=1 Scan a directory with number regularized fname llite.lustre-ffff8800b3e45000.statahead_stats=clear mdc.lustre-MDT0000-mdc-ffff8800b3e45000.batch_stats=0 llite.lustre-ffff8800b3e45000.statahead_max=1024 llite.lustre-ffff8800b3e45000.statahead_batch_max=1024 statahead total: 0 statahead wrong: 0 agl total: 0 list_total: 0 fname_total: 0 hit_total: 0 miss_total: 0 Wait statahead thread (ll_sa_xxx) to exit... Waiting 35s for '' statahead total: 1 statahead wrong: 0 agl total: 1 list_total: 0 fname_total: 1 hit_total: 9993 miss_total: 0 snapshot_time: 1713489066.721620444 secs.nsecs start_time: 1713489052.721410081 secs.nsecs elapsed_time: 14.000210363 secs.nsecs subreqs per batch batches % cum % 1: 1 7 7 2: 0 0 7 4: 0 0 7 8: 1 7 15 16: 1 7 23 32: 0 0 23 64: 0 0 23 128: 0 0 23 256: 0 0 23 512: 0 0 23 1024: 10 76 100 Scan a directory with zeroed padding number regularized fname llite.lustre-ffff8800b3e45000.statahead_stats=clear mdc.lustre-MDT0000-mdc-ffff8800b3e45000.batch_stats=0 llite.lustre-ffff8800b3e45000.statahead_max=1024 llite.lustre-ffff8800b3e45000.statahead_batch_max=1024 statahead total: 0 statahead wrong: 0 agl total: 0 list_total: 0 fname_total: 0 hit_total: 0 miss_total: 0 Wait statahead thread (ll_sa_xxx) to exit... 
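The statahead counters and tunables dumped above are ordinary lctl parameters. A minimal sketch of the pattern tests 123f/123g drive (the directory name is illustrative, not from this run):

    # widen the statahead window and zero the counters before a big listing
    lctl set_param llite.*.statahead_max=64 llite.*.statahead_agl=1
    lctl set_param llite.*.statahead_stats=clear
    ls -l /mnt/lustre/testdir > /dev/null      # triggers async statahead on the client
    lctl get_param llite.*.statahead_stats     # compare hit_total vs miss_total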
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 123h: Verify statahead work with the fname pattern via du ========================================================== 21:08:45 (1713488925)
llite.lustre-ffff8800b3e45000.enable_statahead_fname=1
Scan a directory with number regularized fname
llite.lustre-ffff8800b3e45000.statahead_stats=clear
mdc.lustre-MDT0000-mdc-ffff8800b3e45000.batch_stats=0
llite.lustre-ffff8800b3e45000.statahead_max=1024
llite.lustre-ffff8800b3e45000.statahead_batch_max=1024
statahead total: 0 statahead wrong: 0 agl total: 0 list_total: 0 fname_total: 0 hit_total: 0 miss_total: 0
Wait statahead thread (ll_sa_xxx) to exit...
Waiting 35s for ''
statahead total: 1 statahead wrong: 0 agl total: 1 list_total: 0 fname_total: 1 hit_total: 9993 miss_total: 0
snapshot_time: 1713489066.721620444 secs.nsecs
start_time: 1713489052.721410081 secs.nsecs
elapsed_time: 14.000210363 secs.nsecs
subreqs per batch batches % cum %
1: 1 7 7
2: 0 0 7
4: 0 0 7
8: 1 7 15
16: 1 7 23
32: 0 0 23
64: 0 0 23
128: 0 0 23
256: 0 0 23
512: 0 0 23
1024: 10 76 100
Scan a directory with zeroed padding number regularized fname
llite.lustre-ffff8800b3e45000.statahead_stats=clear
mdc.lustre-MDT0000-mdc-ffff8800b3e45000.batch_stats=0
llite.lustre-ffff8800b3e45000.statahead_max=1024
llite.lustre-ffff8800b3e45000.statahead_batch_max=1024
statahead total: 0 statahead wrong: 0 agl total: 0 list_total: 0 fname_total: 0 hit_total: 0 miss_total: 0
Wait statahead thread (ll_sa_xxx) to exit...
Waiting 35s for ''
statahead total: 1 statahead wrong: 0 agl total: 1 list_total: 0 fname_total: 1 hit_total: 9993 miss_total: 0
snapshot_time: 1713489271.472403208 secs.nsecs
start_time: 1713489257.498039061 secs.nsecs
elapsed_time: 13.974364147 secs.nsecs
subreqs per batch batches % cum %
1: 1 7 7
2: 0 0 7
4: 0 0 7
8: 1 7 15
16: 1 7 23
32: 0 0 23
64: 0 0 23
128: 0 0 23
256: 0 0 23
512: 0 0 23
1024: 10 76 100
llite.lustre-ffff8800b3e45000.enable_statahead_fname=0
llite.lustre-ffff8800b3e45000.statahead_batch_max=64
llite.lustre-ffff8800b3e45000.statahead_max=128
PASS 123h (397s)
debug_raw_pointers=0 debug_raw_pointers=0
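For assertions like the hit_total: 9993 above, the counters can be pulled apart with plain awk; a sketch using the parameter names as printed in this log:

    stats=$(lctl get_param -n llite.*.statahead_stats)
    hits=$(echo "$stats" | awk '/hit_total/ {print $2}')
    miss=$(echo "$stats" | awk '/miss_total/ {print $2}')
    echo "statahead hit=$hits miss=$miss"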
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 123i: Verify statahead work with the fname indexing pattern ========================================================== 21:15:24 (1713489324)
llite.lustre-ffff8800b3e45000.statahead_max=1024
llite.lustre-ffff8800b3e45000.statahead_batch_max=32
llite.lustre-ffff8800b3e45000.statahead_min=64
llite.lustre-ffff8800b3e45000.enable_statahead_fname=1
Command:
- createmany -m /mnt/lustre/d123i.sanity/f123i.sanity.%06d 1000
- ls /mnt/lustre/d123i.sanity/* > /dev/null
total: 1000 create in 1.03 seconds: 970.42 ops/second
llite.lustre-ffff8800b3e45000.statahead_stats=clear
mdc.lustre-MDT0000-mdc-ffff8800b3e45000.batch_stats=0
statahead_stats (Pre): statahead total: 0 statahead wrong: 0 agl total: 0 list_total: 0 fname_total: 0 hit_total: 0 miss_total: 0
statahead_stats (Post): statahead total: 1 statahead wrong: 0 agl total: 1 list_total: 0 fname_total: 1 hit_total: 0 miss_total: 0
snapshot_time: 1713489328.135958114 secs.nsecs
start_time: 1713489327.158971677 secs.nsecs
elapsed_time: 0.976986437 secs.nsecs
subreqs per batch batches % cum %
1: 1 1 1
2: 3 4 6
4: 3 4 10
8: 1 1 12
16: 1 1 13
32: 57 86 100
Wait the statahead thread (ll_sa_xxx) to exit ...
Waiting 35s for ''
Waiting 25s for ''
statahead total: 1 statahead wrong: 0 agl total: 1 list_total: 0 fname_total: 1 hit_total: 994 miss_total: 1
snapshot_time: 1713489358.400756163 secs.nsecs
start_time: 1713489327.158971677 secs.nsecs
elapsed_time: 31.241784486 secs.nsecs
subreqs per batch batches % cum %
1: 2 2 2
2: 3 4 6
4: 3 4 11
8: 1 1 12
16: 1 1 13
32: 62 86 100
Command:
- createmany -m /mnt/lustre/d123i.sanity/f123i.sanity 1000
- aheadmany -c stat -N -s 0 -e 1000 -b f123i.sanity -d /mnt/lustre/d123i.sanity
total: 1000 create in 0.89 seconds: 1127.09 ops/second
llite.lustre-ffff8800b3e45000.statahead_stats=clear
mdc.lustre-MDT0000-mdc-ffff8800b3e45000.batch_stats=0
statahead_stats (Pre): statahead total: 0 statahead wrong: 0 agl total: 0 list_total: 0 fname_total: 0 hit_total: 0 miss_total: 0
statahead_stats (Post): statahead total: 1 statahead wrong: 0 agl total: 1 list_total: 0 fname_total: 1 hit_total: 0 miss_total: 0
snapshot_time: 1713489361.211508315 secs.nsecs
start_time: 1713489360.465971524 secs.nsecs
elapsed_time: 0.745536791 secs.nsecs
subreqs per batch batches % cum %
1: 23 25 25
2: 0 0 25
4: 0 0 25
8: 5 5 31
16: 1 1 32
32: 61 67 100
Wait the statahead thread (ll_sa_xxx) to exit ...
Waiting 35s for ''
Waiting 25s for ''
Waiting 15s for ''
statahead total: 1 statahead wrong: 0 agl total: 1 list_total: 0 fname_total: 1 hit_total: 995 miss_total: 0
snapshot_time: 1713489391.496410366 secs.nsecs
start_time: 1713489360.465971524 secs.nsecs
elapsed_time: 31.030438842 secs.nsecs
subreqs per batch batches % cum %
1: 24 26 26
2: 0 0 26
4: 0 0 26
8: 5 5 31
16: 1 1 32
32: 61 67 100
- unlinked 0 (time 1713489394 ; total 1 ; last 1)
total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second
- unlinked 0 (time 1713489396 ; total 0 ; last 0)
total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second
llite.lustre-ffff8800b3e45000.enable_statahead_fname=0
llite.lustre-ffff8800b3e45000.statahead_min=8
llite.lustre-ffff8800b3e45000.statahead_batch_max=64
llite.lustre-ffff8800b3e45000.statahead_max=128
PASS 123i (74s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 124a: lru resize ================================================================================================= 21:16:41 (1713489401)
create 2000 files at /mnt/lustre/d124a.sanity
total: 2000 open/close in 8.11 seconds: 246.52 ops/second
NSDIR=ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3e45000
NS=ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3e45000
LRU=2003
LIMIT=62166
LVF=3724300
OLD_LVF=100
Sleep 50 sec
...2003...2003...2003...2003...2003...1938...1938...1465...1465...1178
Dropped 825 locks in 50s
unlink 2000 files at /mnt/lustre/d124a.sanity
- unlinked 0 (time 1713489469 ; total 0 ; last 0)
total: 2000 unlinks in 3 seconds: 666.666687 unlinks/second
PASS 124a (74s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 124b: lru resize (performance test) ================================================================================= 21:17:57 (1713489477)
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3e45000.lru_size=400
- open/close 2684 (time 1713489488.64 total 10.00 last 268.40)
- open/close 6308 (time 1713489498.64 total 20.00 last 362.38)
total: 8000 open/close in 23.97 seconds: 333.78 ops/second
doing ls -la /mnt/lustre/d124b.sanity/disable_lru_resize 3 times
ls -la time: 80 seconds
lru_size = 400
- unlinked 0 (time 1713489585 ; total 0 ; last 0)
total: 8000 unlinks in 28 seconds: 285.714294 unlinks/second
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3e45000.lru_size=0
- open/close 2872 (time 1713489624.79 total 10.00 last 287.16)
- open/close 5994 (time 1713489634.79 total 20.00 last 312.13)
total: 8000 open/close in 26.18 seconds: 305.57 ops/second
doing ls -la /mnt/lustre/d124b.sanity/enable_lru_resize 3 times
ls -la time: 10 seconds
lru_size = 8005
ls -la is 87% faster with lru resize enabled
- unlinked 0 (time 1713489654 ; total 0 ; last 0)
total: 8000 unlinks in 14 seconds: 571.428589 unlinks/second
PASS 124b (193s)
debug_raw_pointers=0 debug_raw_pointers=0
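The two lru_size settings exercised above have opposite meanings: a fixed value pins the client lock LRU, while 0 re-enables dynamic resize (the 87% ls -la speedup 124b reports). A sketch, with the namespace glob as an assumption:

    NS='ldlm.namespaces.*-mdc-*'
    lctl set_param $NS.lru_size=400   # fixed LRU: locks beyond 400 get cancelled
    lctl set_param $NS.lru_size=0     # back to server-driven dynamic resize
    lctl get_param $NS.lru_size $NS.lock_count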
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 124c: LRUR cancel very aged locks ========= 21:21:12 (1713489672)
total: 100 open/close in 0.96 seconds: 104.62 ops/second
unused=104, max_age=3900000, recalc_p=10
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3e45000.lru_max_age=1000
sleep 20 seconds...
- unlinked 0 (time 1713489694 ; total 0 ; last 0)
total: 100 unlinks in 1 seconds: 100.000000 unlinks/second
PASS 124c (24s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 124d: cancel very aged locks if lru-resize disabled ========================================================== 21:21:38 (1713489698)
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3e45000.lru_size=400
total: 100 open/close in 0.89 seconds: 111.91 ops/second
unused=104, max_age=3900000, recalc_p=10
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3e45000.lru_max_age=1000
sleep 20 seconds...
- unlinked 0 (time 1713489721 ; total 0 ; last 0)
total: 100 unlinks in 1 seconds: 100.000000 unlinks/second
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b3e45000.lru_size=0
PASS 124d (24s)
debug_raw_pointers=0 debug_raw_pointers=0
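124c/124d shrink lru_max_age so even an under-populated LRU cancels old locks, and the sleep gives the ldlm pool thread time to act. A sketch with the values from the runs above (treating lru_max_age as milliseconds, per the 3900000 default shown):

    lctl set_param ldlm.namespaces.*-mdc-*.lru_max_age=1000
    sleep 20                       # let aged locks be cancelled
    lctl get_param ldlm.namespaces.*-mdc-*.lock_count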
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 125: don't return EPROTO when a dir has a non-default striping and ACLs ========================================================== 21:22:04 (1713489724)
uid=500(sanityusr) gid=500(sanityusr) groups=500(sanityusr)
drwxrwxr-x+ 2 root root 11776 Apr 18 21:22 /mnt/lustre/d125.sanity
PASS 125 (2s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 126: check that the fsgid provided by the client is taken into account ========================================================== 21:22:08 (1713489728)
running as uid/gid/euid/egid 0/1/0/1, groups: [touch] [/mnt/lustre/f126.sanity]
PASS 126 (1s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 127a: verify the client stats are sane ==== 21:22:11 (1713489731)
enable_stats_header=1
stats before reset
osc.lustre-OST0000-osc-ffff8800b3e45000.stats=
snapshot_time 1713489732.362441038 secs.nsecs
start_time 1713488644.031434151 secs.nsecs
elapsed_time 1088.331006887 secs.nsecs
req_waittime 300512 samples [usecs] 341 14672592 13660274111 16591513293581111
req_active 300512 samples [reqs] 1 1395 38842497 11975818757
ldlm_glimpse_enqueue 136539 samples [reqs] 1 1 136539 136539
ost_setattr 126724 samples [usecs] 827 353758 5625816581 440239180276577
ost_connect 1 samples [usecs] 1706 1706 1706 2910436
ost_statfs 1 samples [usecs] 2346 2346 2346 5503716
ldlm_cancel 37155 samples [usecs] 341 14672592 1675195334 15394258832570456
obd_ping 92 samples [usecs] 604 7932 191021 490708383
osc.lustre-OST0001-osc-ffff8800b3e45000.stats=
snapshot_time 1713489732.362581743 secs.nsecs
start_time 1713488644.032682594 secs.nsecs
elapsed_time 1088.329899149 secs.nsecs
req_waittime 299318 samples [usecs] 429 15709239 14363951669 28846755243333399
req_active 299318 samples [reqs] 1 1350 38297073 11531971107
ldlm_glimpse_enqueue 135889 samples [reqs] 1 1 135889 135889
ost_setattr 126096 samples [usecs] 849 352831 5460634331 422502172871555
ost_connect 1 samples [usecs] 1393 1393 1393 1940449
ost_statfs 1 samples [usecs] 2106 2106 2106 4435236
ldlm_cancel 37234 samples [usecs] 429 15709239 2515714504 27666691724140166
obd_ping 97 samples [usecs] 605 43535 257415 2594038595
osc.lustre-OST0000-osc-ffff8800b3e45000.stats=0
osc.lustre-OST0001-osc-ffff8800b3e45000.stats=0
1+0 records in 1+0 records out 2097152 bytes (2.1 MB) copied, 0.0740995 s, 28.3 MB/s
1+0 records in 1+0 records out 2097152 bytes (2.1 MB) copied, 0.0958145 s, 21.9 MB/s
got name=req_waittime count=9 unit=[usecs] min=1483 max=34941
got name=req_active count=9 unit=[reqs] min=1 max=2
got name=ldlm_extent_enqueue count=2 unit=[reqs] min=1 max=1
got name=read_bytes count=2 unit=[bytes] min=1048576 max=1048576
got name=write_bytes count=2 unit=[bytes] min=1048576 max=1048576
got name=ost_read count=2 unit=[usecs] min=5362 max=6827
got name=ost_write count=2 unit=[usecs] min=5781 max=15015
got name=ost_punch count=1 unit=[usecs] min=2189 max=2189
got name=ldlm_cancel count=2 unit=[usecs] min=11631 max=34941
PASS 127a (2s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 127b: verify the llite client stats are sane ========================================================== 21:22:15 (1713489735)
stats before reset
llite.lustre-ffff8800b3e45000.stats=
snapshot_time 1713489735.992762085 secs.nsecs
start_time 1713488644.015821017 secs.nsecs
elapsed_time 1091.976941068 secs.nsecs
read_bytes 1 samples [bytes] 2097152 2097152 2097152 4398046511104
write_bytes 1 samples [bytes] 2097152 2097152 2097152 4398046511104
read 1 samples [usecs] 90755 90755 90755 8236470025
write 1 samples [usecs] 68226 68226 68226 4654787076
ioctl 73 samples [reqs]
open 40436 samples [usecs] 2 5536 1230756 286007226
close 40436 samples [usecs] 18 72300 67314179 173223040115
seek 1 samples [usecs] 10 10 10 100
readdir 89 samples [usecs] 3 177380 2021440 180424304482
setattr 21205 samples [usecs] 4177 509511 199601810 15638756167482
truncate 1 samples [usecs] 7281 7281 7281 53012961
getattr 75563 samples [usecs] 76 734973 110785527 18923519766823
create 2000 samples [usecs] 589 38567 1851075 3240036363
unlink 41405 samples [usecs] 591 492218 165247501 5842954033545
mkdir 13 samples [usecs] 2834 35630 95362 1585438646
rmdir 4 samples [usecs] 5630 19634 49356 709533136
mknod 42406 samples [usecs] 588 332233 144225565 4142034551393
statfs 4 samples [usecs] 1645 2834 9063 21489109
setxattr 1 samples [usecs] 22441 22441 22441 503598481
getxattr 124 samples [usecs] 10 6626 234331 513107963
getxattr_hits 10 samples [reqs]
inode_permission 785985 samples [usecs] 0 103266 24738427 30323645313
opencount 40437 samples [reqs] 1 4 40456 40510
openclosetime 13 samples [usecs] 1681 83715059 110504253 7484949158406921
llite.lustre-ffff8800b3e45000.stats=0
1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00407512 s, 1.0 MB/s
1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.000850867 s, 4.8 MB/s
1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.0046723 s, 877 kB/s
1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00032695 s, 12.5 MB/s
got name=read_bytes count=2 unit=[bytes] min=4096 max=4096
got name=write_bytes count=2 unit=[bytes] min=4096 max=4096
got name=read count=2 unit=[usecs] min=83 max=4221
got name=write count=2 unit=[usecs] min=511 max=2172
got name=open count=4 unit=[usecs] min=19 max=4302
got name=close count=4 unit=[usecs] min=34 max=1662
got name=seek count=2 unit=[usecs] min=10 max=15
got name=truncate count=1 unit=[usecs] min=5949 max=5949
got name=mknod count=1 unit=[usecs] min=4290 max=4290
got name=inode_permission count=9 unit=[usecs] min=2 max=2231
got name=opencount count=4 unit=[reqs] min=1 max=4
got name=openclosetime count=3 unit=[usecs] min=2680 max=34752
PASS 127b (2s)
debug_raw_pointers=0 debug_raw_pointers=0
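127a/127b follow a zero-then-diff pattern on the osc and llite stats files; a compressed sketch of the same idea (the target file name is illustrative):

    lctl set_param osc.*.stats=0 llite.*.stats=0        # reset the counters
    dd if=/dev/zero of=/mnt/lustre/f127.tmp bs=1M count=2 conv=fsync
    # each stats line reads: <name> <count> samples [unit] min max sum sumsq
    lctl get_param llite.*.stats | awk '$3 == "samples" {print $1, $2}'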
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 127c: test llite extent stats with regular & mmap i/o ========================================================== 21:22:19 (1713489739)
llite.lustre-ffff8800b3e45000.extents_stats=1
1+0 records in 1+0 records out 3072 bytes (3.1 kB) copied, 0.00294417 s, 1.0 MB/s
1+0 records in 1+0 records out 3072 bytes (3.1 kB) copied, 0.00492846 s, 623 kB/s
1+0 records in 1+0 records out 3072 bytes (3.1 kB) copied, 0.00202661 s, 1.5 MB/s
1+0 records in 1+0 records out 3072 bytes (3.1 kB) copied, 0.000443028 s, 6.9 MB/s
1+0 records in 1+0 records out 6144 bytes (6.1 kB) copied, 0.00505113 s, 1.2 MB/s
1+0 records in 1+0 records out 6144 bytes (6.1 kB) copied, 30.0876 s, 0.2 kB/s
1+0 records in 1+0 records out 6144 bytes (6.1 kB) copied, 0.000841219 s, 7.3 MB/s
1+0 records in 1+0 records out 6144 bytes (6.1 kB) copied, 0.000807174 s, 7.6 MB/s
1+0 records in 1+0 records out 12288 bytes (12 kB) copied, 0.0115225 s, 1.1 MB/s
1+0 records in 1+0 records out 12288 bytes (12 kB) copied, 0.00225366 s, 5.5 MB/s
1+0 records in 1+0 records out 12288 bytes (12 kB) copied, 0.000657599 s, 18.7 MB/s
1+0 records in 1+0 records out 12288 bytes (12 kB) copied, 0.0005159 s, 23.8 MB/s
1+0 records in 1+0 records out 24576 bytes (25 kB) copied, 0.0093666 s, 2.6 MB/s
1+0 records in 1+0 records out 24576 bytes (25 kB) copied, 34.8637 s, 0.7 kB/s
1+0 records in 1+0 records out 24576 bytes (25 kB) copied, 0.0005062 s, 48.5 MB/s
1+0 records in 1+0 records out 24576 bytes (25 kB) copied, 0.000418172 s, 58.8 MB/s
1+0 records in 1+0 records out 49152 bytes (49 kB) copied, 0.00797109 s, 6.2 MB/s
1+0 records in 1+0 records out 49152 bytes (49 kB) copied, 34.982 s, 1.4 kB/s
1+0 records in 1+0 records out 49152 bytes (49 kB) copied, 0.000472502 s, 104 MB/s
1+0 records in 1+0 records out 49152 bytes (49 kB) copied, 0.000343942 s, 143 MB/s
1+0 records in 1+0 records out 98304 bytes (98 kB) copied, 0.00680311 s, 14.4 MB/s
1+0 records in 1+0 records out 98304 bytes (98 kB) copied, 29.9972 s, 3.3 kB/s
1+0 records in 1+0 records out 98304 bytes (98 kB) copied, 0.000945143 s, 104 MB/s
1+0 records in 1+0 records out 98304 bytes (98 kB) copied, 0.000494717 s, 199 MB/s
1+0 records in 1+0 records out 196608 bytes (197 kB) copied, 0.0116607 s, 16.9 MB/s
1+0 records in 1+0 records out 196608 bytes (197 kB) copied, 0.0138761 s, 14.2 MB/s
1+0 records in 1+0 records out 196608 bytes (197 kB) copied, 0.000765704 s, 257 MB/s
1+0 records in 1+0 records out 196608 bytes (197 kB) copied, 0.00055523 s, 354 MB/s
1+0 records in 1+0 records out 393216 bytes (393 kB) copied, 0.0152978 s, 25.7 MB/s
1+0 records in 1+0 records out 393216 bytes (393 kB) copied, 0.023586 s, 16.7 MB/s
1+0 records in 1+0 records out 393216 bytes (393 kB) copied, 0.00105283 s, 373 MB/s
1+0 records in 1+0 records out 393216 bytes (393 kB) copied, 0.000793306 s, 496 MB/s
1+0 records in 1+0 records out 786432 bytes (786 kB) copied, 0.0275232 s, 28.6 MB/s
1+0 records in 1+0 records out 786432 bytes (786 kB) copied, 0.036665 s, 21.4 MB/s
1+0 records in 1+0 records out 786432 bytes (786 kB) copied, 0.00200234 s, 393 MB/s
1+0 records in 1+0 records out 786432 bytes (786 kB) copied, 0.00148726 s, 529 MB/s
1+0 records in 1+0 records out 1572864 bytes (1.6 MB) copied, 0.083162 s, 18.9 MB/s
1+0 records in 1+0 records out 1572864 bytes (1.6 MB) copied, 0.0530768 s, 29.6 MB/s
1+0 records in 1+0 records out 1572864 bytes (1.6 MB) copied, 0.00275141 s, 572 MB/s
1+0 records in 1+0 records out 1572864 bytes (1.6 MB) copied, 0.00280999 s, 560 MB/s
1+0 records in 1+0 records out 3145728 bytes (3.1 MB) copied, 0.114608 s, 27.4 MB/s
1+0 records in 1+0 records out 3145728 bytes (3.1 MB) copied, 0.0795162 s, 39.6 MB/s
1+0 records in 1+0 records out 3145728 bytes (3.1 MB) copied, 0.0043675 s, 720 MB/s
1+0 records in 1+0 records out 3145728 bytes (3.1 MB) copied, 0.00305363 s, 1.0 GB/s
1+0 records in 1+0 records out 6291456 bytes (6.3 MB) copied, 0.16359 s, 38.5 MB/s
1+0 records in 1+0 records out 6291456 bytes (6.3 MB) copied, 0.134785 s, 46.7 MB/s
1+0 records in 1+0 records out 6291456 bytes (6.3 MB) copied, 0.00639228 s, 984 MB/s
1+0 records in 1+0 records out 6291456 bytes (6.3 MB) copied, 0.00679414 s, 926 MB/s
1+0 records in 1+0 records out 12582912 bytes (13 MB) copied, 0.292824 s, 43.0 MB/s
1+0 records in 1+0 records out 12582912 bytes (13 MB) copied, 0.288519 s, 43.6 MB/s
1+0 records in 1+0 records out 12582912 bytes (13 MB) copied, 0.0100381 s, 1.3 GB/s
1+0 records in 1+0 records out 12582912 bytes (13 MB) copied, 0.00976186 s, 1.3 GB/s
1+0 records in 1+0 records out 25165824 bytes (25 MB) copied, 0.608033 s, 41.4 MB/s
1+0 records in 1+0 records out 25165824 bytes (25 MB) copied, 0.60615 s, 41.5 MB/s
1+0 records in 1+0 records out 25165824 bytes (25 MB) copied, 0.0171405 s, 1.5 GB/s
1+0 records in 1+0 records out 25165824 bytes (25 MB) copied, 0.0151009 s, 1.7 GB/s
1+0 records in 1+0 records out 50331648 bytes (50 MB) copied, 1.18738 s, 42.4 MB/s
1+0 records in 1+0 records out 50331648 bytes (50 MB) copied, 1.09099 s, 46.1 MB/s
1+0 records in 1+0 records out 50331648 bytes (50 MB) copied, 0.0290732 s, 1.7 GB/s
1+0 records in 1+0 records out 50331648 bytes (50 MB) copied, 0.0299412 s, 1.7 GB/s
llite.lustre-ffff8800b3e45000.extents_stats=
snapshot_time: 1713489876.336937470 secs.nsecs
start_time: 1713489739.407670712 secs.nsecs
elapsed_time: 136.929266758 secs.nsecs
                          read |   write
extents       calls % cum% | calls % cum%
0K - 4K :         2 6   6  |     2 6   6
4K - 8K :         2 6  13  |     2 6  13
8K - 16K :        2 6  20  |     2 6  20
16K - 32K :       2 6  26  |     2 6  26
32K - 64K :       2 6  33  |     2 6  33
64K - 128K :      2 6  40  |     2 6  40
128K - 256K :     2 6  46  |     2 6  46
256K - 512K :     2 6  53  |     2 6  53
512K - 1024K :    2 6  60  |     2 6  60
1M - 2M :         2 6  66  |     2 6  66
2M - 4M :         2 6  73  |     2 6  73
4M - 8M :         2 6  80  |     2 6  80
8M - 16M :        2 6  86  |     2 6  86
16M - 32M :       2 6  93  |     2 6  93
32M - 64M :       2 6 100  |     2 6 100
llite.lustre-ffff8800b3e45000.extents_stats=c
1+0 records in 1+0 records out 524288 bytes (524 kB) copied, 0.0381774 s, 13.7 MB/s
llite.lustre-ffff8800b3e45000.extents_stats=
snapshot_time: 1713489876.729230077 secs.nsecs
start_time: 1713489876.596541096 secs.nsecs
elapsed_time: 0.132688981 secs.nsecs
                          read |   write
extents       calls %  cum% |  calls %  cum%
0K - 4K :         0   0   0 |      0  0   0
4K - 8K :       256 100 100 |    128 99  99
8K - 16K :        0   0 100 |      0  0  99
16K - 32K :       0   0 100 |      0  0  99
32K - 64K :       0   0 100 |      0  0  99
64K - 128K :      0   0 100 |      0  0  99
128K - 256K :     0   0 100 |      0  0  99
256K - 512K :     0   0 100 |      0  0  99
512K - 1024K :    0   0 100 |      1  0 100
llite.lustre-ffff8800b3e45000.extents_stats=0
PASS 127c (140s)
debug_raw_pointers=0 debug_raw_pointers=0
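The two histograms above come from the extents_stats switch, which buckets client I/O by request size; =1 enables it, =c clears the counters, =0 disables it. A sketch (target file name illustrative):

    lctl set_param llite.*.extents_stats=1
    dd if=/dev/zero of=/mnt/lustre/f127c.tmp bs=512k count=1
    lctl get_param llite.*.extents_stats   # see which size buckets the I/O landed in
    lctl set_param llite.*.extents_stats=0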
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 128: interactive lfs for 2 consecutive find's ========================================================== 21:24:40 (1713489880)
lfs: failed for 'find': No such file or directory
/mnt/lustre/f128.sanity
PASS 128 (2s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 129: test directory size limit ================================================================================== 21:24:44 (1713489884)
SKIP: sanity test_129 ldiskfs only test
SKIP 129 (1s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 130a: FIEMAP (1-stripe file) ============== 21:24:47 (1713489887)
1+0 records in 1+0 records out 65536 bytes (66 kB) copied, 0.0035955 s, 18.2 MB/s
/mnt/lustre/f130a.sanity: FIBMAP unsupported
Filesystem type is: bd00bd0
File size of /mnt/lustre/f130a.sanity is 65536 (128 block of 1024 bytes)
SKIP: sanity test_130a LU-1941: FIEMAP unimplemented on ZFS
SKIP 130a (1s)
debug_raw_pointers=0 debug_raw_pointers=0
SKIP: sanity test_130b skipping ALWAYS excluded test 130b
SKIP: sanity test_130c skipping ALWAYS excluded test 130c
SKIP: sanity test_130d skipping ALWAYS excluded test 130d
SKIP: sanity test_130e skipping ALWAYS excluded test 130e
SKIP: sanity test_130f skipping ALWAYS excluded test 130f
SKIP: sanity test_130g skipping ALWAYS excluded test 130g
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 131a: test iov's crossing stripe boundary for writev/readv ========================================================== 21:24:52 (1713489892)
PASS 131a (2s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 131b: test append writev ================== 21:24:55 (1713489895)
/mnt/lustre/f131b.sanity has type file OK
/mnt/lustre/f131b.sanity has size 3145728 OK
/mnt/lustre/f131b.sanity has type file OK
/mnt/lustre/f131b.sanity has size 5767168 OK
PASS 131b (2s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 131c: test read/write on file w/o objects ========================================================== 21:24:59 (1713489899)
Write error: Bad file descriptor (rc = -1, len = 1048576)
PASS 131c (1s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 131d: test short read ===================== 21:25:02 (1713489902)
PASS 131d (2s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 131e: test read hitting hole ============== 21:25:05 (1713489905)
PASS 131e (2s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 133a: Verifying MDT stats ================================================================================================== 21:25:08 (1713489908)
mdt.lustre-MDT0000.rename_stats
mdt.lustre-MDT0000.md_stats=clear
obdfilter.lustre-OST0000.stats=clear
obdfilter.lustre-OST0001.stats=clear
mdt.lustre-MDT0000.md_stats=clear
/mnt/lustre/d133a.sanity/stats_testdir:
total 1
-rw-r--r-- 1 root root 0 Apr 18 21:25 f133a.sanity
PASS 133a (6s)
debug_raw_pointers=0 debug_raw_pointers=0
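133a's checks boil down to clearing the MDT's per-operation counters, doing one metadata operation from a client, and re-reading the counters on the MDS. A sketch (the touched path is illustrative; run the lctl calls on the MDS node):

    lctl set_param mdt.lustre-MDT0000.md_stats=clear
    touch /mnt/lustre/somefile          # from a client: one create + open/close
    lctl get_param mdt.lustre-MDT0000.md_stats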
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 133b: Verifying extra MDT stats ============================================================================================ 21:25:15 (1713489915)
mdt.lustre-MDT0000.md_stats=clear
obdfilter.lustre-OST0000.stats=clear
obdfilter.lustre-OST0001.stats=clear
mdt.lustre-MDT0000.md_stats=clear
mdt.lustre-MDT0000.md_stats=clear
obdfilter.lustre-OST0000.exports.0@lo.stats=clear
obdfilter.lustre-OST0000.exports.192.168.202.16@tcp.stats=clear
obdfilter.lustre-OST0001.exports.0@lo.stats=clear
obdfilter.lustre-OST0001.exports.192.168.202.16@tcp.stats=clear
UUID                 1K-blocks  Used  Available Use% Mounted on
lustre-MDT0000_UUID    2210560 12032    2196480   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID    3771392 12288    3755008   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID    3771392  9216    3760128   1% /mnt/lustre[OST:1]
filesystem_summary:    7542784 21504    7515136   1% /mnt/lustre
mdt.lustre-MDT0000.md_stats=clear
obdfilter.lustre-OST0000.exports.0@lo.stats=clear
obdfilter.lustre-OST0000.exports.192.168.202.16@tcp.stats=clear
obdfilter.lustre-OST0001.exports.0@lo.stats=clear
obdfilter.lustre-OST0001.exports.192.168.202.16@tcp.stats=clear
Filesystem                   1K-blocks  Used Available Use% Mounted on
192.168.202.116@tcp:/lustre    7542784 21504   7515136   1% /mnt/lustre
PASS 133b (10s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 133c: Verifying OST stats ================================================================================================== 21:25:26 (1713489926)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
mdt.lustre-MDT0000.md_stats=clear
obdfilter.lustre-OST0000.stats=clear
obdfilter.lustre-OST0001.stats=clear
1+0 records in 1+0 records out 524288 bytes (524 kB) copied, 0.0292553 s, 17.9 MB/s
1+0 records in 1+0 records out 1024 bytes (1.0 kB) copied, 0.0104014 s, 98.4 kB/s
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 133c (30s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 133d: Verifying rename_stats ================================================================================================== 21:25:57 (1713489957)
mdt.lustre-MDT0000.rename_stats
mdt.lustre-MDT0000.rename_stats=clear
total: 512 open/close in 1.98 seconds: 258.43 ops/second
source rename dir size: 16K
target rename dir size: 16K
mdt.lustre-MDT0000.rename_stats=
rename_stats:
- snapshot_time: 1713489963.652863551
- start_time: 1713489959.738213027
- elapsed_time: 3.914650524
- same_dir: 16KB: { sample: 1, pct: 100, cum_pct: 100 }
Check same dir rename stats success
mdt.lustre-MDT0000.rename_stats=clear
source rename dir size: 16K
target rename dir size: 16K
mdt.lustre-MDT0000.rename_stats=
rename_stats:
- snapshot_time: 1713489965.079435155
- start_time: 1713489964.657342833
- elapsed_time: 0.422092322
- crossdir_src: 16KB: { sample: 1, pct: 100, cum_pct: 100 }
- crossdir_tgt: 16KB: { sample: 1, pct: 100, cum_pct: 100 }
Check cross dir rename stats success
PASS 133d (12s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 133e: Verifying OST {read,write}_bytes nid stats =========================================================================== 21:26:11 (1713489971)
42+0 records in 42+0 records out 1376256 bytes (1.4 MB) copied, 0.142665 s, 9.6 MB/s
42+0 records in 42+0 records out 1376256 bytes (1.4 MB) copied, 0.0470048 s, 29.3 MB/s
PASS 133e (4s)
debug_raw_pointers=0 debug_raw_pointers=0
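133e relies on the per-export (per-NID) counters under obdfilter, the same files the exports.*.stats=clear lines above reset. Reading them back on the OSS looks roughly like:

    lctl get_param obdfilter.lustre-OST0000.exports.*.stats
    # read_bytes/write_bytes show up under the NID that issued the I/O, e.g.
    # obdfilter.lustre-OST0000.exports.192.168.202.16@tcp.stats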
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 133f: Check reads/writes of client lustre proc files with bad area io ========================================================== 21:26:17 (1713489977)
cln..Stopping clients: oleg216-client.virtnet /mnt/lustre (opts:)
Stopping client oleg216-client.virtnet /mnt/lustre opts:
Stopping clients: oleg216-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg216-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg216-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg216-server
unloading modules on: 'oleg216-server'
oleg216-server: oleg216-server.virtnet: executing unload_modules_local
modules unloaded.
mnt..Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg216-server'
oleg216-server: oleg216-server.virtnet: executing load_modules_local
oleg216-server: Loading modules from /home/green/git/lustre-release/lustre
oleg216-server: detected 4 online CPUs by sysfs
oleg216-server: Force libcfs to create 2 CPU partitions
oleg216-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg216-server: quota/lquota options: 'hash_lqs_cur_bits=3'
Checking servers environments
Checking clients oleg216-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg216-server'
oleg216-server: oleg216-server.virtnet: executing load_modules_local
oleg216-server: Loading modules from /home/green/git/lustre-release/lustre
oleg216-server: detected 4 online CPUs by sysfs
oleg216-server: Force libcfs to create 2 CPU partitions
Setup mgs, mdt, osts
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg216-server: oleg216-server.virtnet: executing set_default_debug all all
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg216-server: oleg216-server.virtnet: executing set_default_debug all all
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
Started lustre-OST0000
Starting ost2: -o localrecov lustre-ost2/ost2 /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg216-server: oleg216-server.virtnet: executing set_default_debug all all
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
Started lustre-OST0001
Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre
Starting client oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre
Started clients oleg216-client.virtnet:
192.168.202.116@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b603b000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b603b000.idle_timeout=debug
disable quota as required
done
PASS 133f (68s)
debug_raw_pointers=0 debug_raw_pointers=0
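133f (and 133g for the server side) reads and writes every exposed parameter file with deliberately awkward buffer offsets and sizes to prove nothing oopses. A client-side approximation of the idea, not the test's actual walker:

    find /sys/fs/lustre -type f 2>/dev/null | while read -r f; do
        dd if="$f" of=/dev/null bs=7 skip=1 2>/dev/null || true   # odd size + offset read
    done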
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 133g: Check reads/writes of server lustre proc files with bad area io ========================================================== 21:27:27 (1713490047)
cln..Stopping clients: oleg216-client.virtnet /mnt/lustre (opts:)
[ ... client/server teardown, module unload and reload sequence identical to test 133f above, abridged ... ]
Started clients oleg216-client.virtnet:
192.168.202.116@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff88012b4cd800.idle_timeout=debug
osc.lustre-OST0001-osc-ffff88012b4cd800.idle_timeout=debug
disable quota as required
done
PASS 133g (83s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 133h: Proc files should end with newlines ========================================================== 21:28:53 (1713490133)
PASS 133h (240s)
debug_raw_pointers=0 debug_raw_pointers=0
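133h's invariant is simply that every parameter file ends in a newline; a quick self-contained check in the same spirit:

    for f in $(find /sys/fs/lustre -type f 2>/dev/null); do
        [ -s "$f" ] && [ -n "$(tail -c1 "$f" 2>/dev/null)" ] && echo "no trailing newline: $f"
    done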
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 134a: Server reclaims locks when reaching lock_reclaim_threshold ========================================================== 21:32:56 (1713490376)
total: 1000 open/close in 3.09 seconds: 323.12 ops/second
fail_loc=0x327
fail_val=500
sleep 10 seconds ...
fail_loc=0
fail_val=0
- unlinked 0 (time 1713490392 ; total 0 ; last 0)
total: 1000 unlinks in 1 seconds: 1000.000000 unlinks/second
PASS 134a (19s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 134b: Server rejects lock request when reaching lock_limit_mb ========================================================== 21:33:17 (1713490397)
ldlm.lock_reclaim_threshold_mb=0
fail_loc=0x328
fail_val=500
debug=+trace
Sleep 20 seconds ...
fail_loc=0
fail_val=0
oleg216-server: error: set_param: setting /sys/kernel/debug/lustre/ldlm/lock_reclaim_threshold_mb=746m: Invalid argument
oleg216-server: error: set_param: setting 'ldlm/lock_reclaim_threshold_mb'='746m': Invalid argument
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22
- open/close 507 (time 1713490409.61 total 10.14 last 50.00)
- open/close 517 (time 1713490419.67 total 20.21 last 0.99)
total: 600 open/close in 20.52 seconds: 29.24 ops/second
- unlinked 0 (time 1713490421 ; total 0 ; last 0)
total: 600 unlinks in 1 seconds: 600.000000 unlinks/second
PASS 134b (27s)
debug_raw_pointers=0 debug_raw_pointers=0
SKIP: sanity test_135 skipping SLOW test 135
SKIP: sanity test_136 skipping SLOW test 136
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 140: Check reasonable stack depth (shouldn't LBUG) ============================================================== 21:33:48 (1713490428)
The symlink depth = 40
open symlink_self returns 40
PASS 140 (10s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 150a: truncate/append tests =============== 21:34:00 (1713490440)
1+0 records in 1+0 records out 6096 bytes (6.1 kB) copied, 0.000525474 s, 11.6 MB/s
Stopping client oleg216-client.virtnet /mnt/lustre (opts:)
Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre
Filesystem                   1024-blocks  Used Available Capacity Mounted on
192.168.202.116@tcp:/lustre      7542784 21504   7515136       1% /mnt/lustre
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
PASS 150a (21s)
debug_raw_pointers=0 debug_raw_pointers=0
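Every fallocate test that follows skips because the ZFS OSD does not advertise fallocate support; the same condition can be probed from a client with the stock fallocate tool (probe file name illustrative):

    fallocate -l 64k /mnt/lustre/f150.probe 2>/dev/null &&
        echo "fallocate supported" || echo "fallocate not supported"
    rm -f /mnt/lustre/f150.probe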
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 150b: Verify fallocate (prealloc) functionality ========================================================== 21:34:23 (1713490463)
fallocate on zfs doesn't consume space
fallocate not supported
SKIP: sanity test_150b need >= 2.13.57 and ldiskfs for fallocate
SKIP 150b (1s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 150bb: Verify fallocate modes both zero space ========================================================== 21:34:26 (1713490466)
fallocate on zfs doesn't consume space
fallocate not supported
SKIP: sanity test_150bb need >= 2.13.57 and ldiskfs for fallocate
SKIP 150bb (1s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 150c: Verify fallocate Size and Blocks ==== 21:34:29 (1713490469)
fallocate on zfs doesn't consume space
fallocate not supported
SKIP: sanity test_150c need >= 2.13.57 and ldiskfs for fallocate
SKIP 150c (1s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 150d: Verify fallocate Size and Blocks - Non zero start ========================================================== 21:34:31 (1713490471)
fallocate on zfs doesn't consume space
fallocate not supported
SKIP: sanity test_150d need >= 2.13.57 and ldiskfs for fallocate
SKIP 150d (2s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 150e: Verify 60% of available OST space consumed by fallocate ========================================================== 21:34:34 (1713490474)
fallocate on zfs doesn't consume space
fallocate not supported
SKIP: sanity test_150e need >= 2.13.57 and ldiskfs for fallocate
SKIP 150e (1s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 150f: Verify fallocate punch functionality ========================================================== 21:34:37 (1713490477)
SKIP: sanity test_150f LU-14160: punch mode is not implemented on OSD ZFS
SKIP 150f (1s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 150g: Verify fallocate punch on large range ========================================================== 21:34:40 (1713490480)
SKIP: sanity test_150g LU-14160: punch mode is not implemented on OSD ZFS
SKIP 150g (1s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 150h: Verify extend fallocate updates the file size ========================================================== 21:34:43 (1713490483)
fallocate on zfs doesn't consume space
fallocate not supported
SKIP: sanity test_150h need >= 2.13.57 and ldiskfs for fallocate
SKIP 150h (1s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 151: test cache on oss and controls ========================================================================================= 21:34:46 (1713490486)
oleg216-server: error: get_param: param_path 'osd-*/lustre-OST*/read_cache_enable': No such file or directory
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2
SKIP: sanity test_151 not cache-capable obdfilter
SKIP 151 (2s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 152: test read/write with enomem ====================================================================================== 21:34:50 (1713490490)
fail_loc=0x80000226
1+0 records in 1+0 records out 6096 bytes (6.1 kB) copied, 0.00032562 s, 18.7 MB/s
fail_loc=0
fail_loc=0x80000226
fail_loc=0
PASS 152 (2s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 153: test if fdatasync does not crash ================================================================================= 21:34:53 (1713490493)
PASS 153 (2s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 154A: lfs path2fid and fid2path basic checks ========================================================== 21:34:57 (1713490497)
/mnt/lustre [0x2000013a2:0x3:0x0]
/mnt/lustre/// [0x2000013a2:0x3:0x0]
/mnt/lustre/f154A.sanity [0x2000013a2:0x3:0x0]
lfs fid2path: cannot resolve mount point for '/mnt/lustre_wrong': No such device
PASS 154A (2s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 154B: verify the ll_decode_linkea tool ==== 21:35:01 (1713490501)
PFID: [0x2000013a2:0x4:0x0], name: f154B.sanity
PASS 154B (2s)
debug_raw_pointers=0 debug_raw_pointers=0
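154A's checks are a FID round trip; the same thing by hand, using the file created above:

    fid=$(lfs path2fid /mnt/lustre/f154A.sanity)   # prints [seq:oid:ver]
    lfs fid2path /mnt/lustre "$fid"                # resolves back to the path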
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 154a: Open-by-FID ========================= 21:35:05 (1713490505)
stat fid [0x2000013a2:0x6:0x0]
  File: '/mnt/lustre/.lustre/fid/[0x2000013a2:0x6:0x0]'
  Size: 159 Blocks: 1 IO Block: 4194304 regular file
Device: 2c54f966h/743766374d Inode: 144115272398143494 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2024-04-18 21:35:05.000000000 -0400
Modify: 2024-04-18 21:35:05.000000000 -0400
Change: 2024-04-18 21:35:05.000000000 -0400
 Birth: -
touch fid [0x2000013a2:0x6:0x0]
write to fid [0x2000013a2:0x6:0x0]
read fid [0x2000013a2:0x6:0x0]
append write to fid [0x2000013a2:0x6:0x0]
rename fid [0x2000013a2:0x6:0x0]
mv: cannot move '/mnt/lustre/.lustre/fid/[0x2000013a2:0x6:0x0]' to '/mnt/lustre/f154a.sanity.1': Operation not permitted
mv: cannot move '/mnt/lustre/f154a.sanity.1' to '/mnt/lustre/.lustre/fid/[0x2000013a2:0x6:0x0]': Operation not permitted
truncate fid [0x2000013a2:0x6:0x0]
link fid [0x2000013a2:0x6:0x0]
uid=500(sanityusr) gid=500(sanityusr) groups=500(sanityusr)
setfacl fid [0x2000013a2:0x6:0x0]
getfacl fid [0x2000013a2:0x6:0x0]
getfacl: Removing leading '/' from absolute path names
# file: mnt/lustre/.lustre/fid/[0x2000013a2:0x6:0x0]
# owner: root
# group: root
user::rw-
user:sanityusr:rwx
group::r--
mask::rwx
other::r--
unlink fid [0x2000013a2:0x6:0x0]
unlink: cannot unlink '/mnt/lustre/.lustre/fid/[0x2000013a2:0x6:0x0]': Operation not permitted
mknod fid [0x2000013a2:0x6:0x0]
mknod: '/mnt/lustre/.lustre/fid/[0x2000013a2:0x6:0x0]': Operation not permitted
stat non-exist fid [0xf00000400:0x1:0x0]
stat: cannot stat '/mnt/lustre/.lustre/fid/[0xf00000400:0x1:0x0]': No such file or directory
write to non-exist fid [0xf00000400:0x1:0x0]
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 16991: /mnt/lustre/.lustre/fid/[0xf00000400:0x1:0x0]: Operation not permitted
link new fid [0xf00000400:0x1:0x0]
ln: failed to create hard link '/mnt/lustre/.lustre/fid/[0xf00000400:0x1:0x0]' => '/mnt/lustre/f154a.sanity': Operation not permitted
ls [0x2000013a2:0xa:0x0]
f154a.sanity
touch [0x2000013a2:0xa:0x0]/f154a.sanity.1
touch /mnt/lustre/.lustre/fid/f154a.sanity
touch: setting times of '/mnt/lustre/.lustre/fid/f154a.sanity': No such file or directory
setxattr to /mnt/lustre/.lustre/fid
listxattr for /mnt/lustre/.lustre/fid
getfattr: Removing leading '/' from absolute path names
# file: mnt/lustre/.lustre/fid
trusted.lma=0sAAAAAAAAAAACAAAAAgAAAAIAAAAAAAAA
trusted.name1="value1"
trusted.version=0sfRMAAAMAAAA=
delxattr from /mnt/lustre/.lustre/fid
touch invalid fid: /mnt/lustre/.lustre/fid/[0x200000400:0x2:0x3]
touch: setting times of '/mnt/lustre/.lustre/fid/[0x200000400:0x2:0x3]': No such file or directory
touch non-normal fid: /mnt/lustre/.lustre/fid/[0x1:0x2:0x0]
touch: setting times of '/mnt/lustre/.lustre/fid/[0x1:0x2:0x0]': No such file or directory
rename d154a.sanity to /mnt/lustre/.lustre/fid
rename '/mnt/lustre/d154a.sanity' returned -1: Operation not permitted
change mode of /mnt/lustre/.lustre/fid to 777
restore mode of /mnt/lustre/.lustre/fid to 100
Succeed in opening file "/mnt/lustre/f154a.sanity-2"(flags=O_LOV_DELAY_CREATE)
cp /etc/passwd /mnt/lustre/.lustre/fid/[0x2000013a2:0x10:0x0]
cp /etc/passwd /mnt/lustre/f154a.sanity-2
diff /etc/passwd /mnt/lustre/.lustre/fid/[0x2000013a2:0x10:0x0]
rm: cannot remove '/mnt/lustre/.lustre/lost+found/MDT0000': Operation not permitted
rm: cannot remove '/mnt/lustre/.lustre/fid': Operation not permitted
touch: setting times of '/mnt/lustre/.lustre/file': No such file or directory
mkdir: cannot create directory '/mnt/lustre/.lustre/dir': Operation not permitted
PASS 154a (3s)
debug_raw_pointers=0 debug_raw_pointers=0
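The .lustre/fid directory used throughout 154a gives direct open-by-FID access to file data, while namespace-changing operations against it are refused (hence all the Operation not permitted lines above). Reading a file by FID:

    fid=$(lfs path2fid /mnt/lustre/f154a.sanity)
    cat "/mnt/lustre/.lustre/fid/$fid" > /dev/null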
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 154b: Open-by-FID for remote directory ==== 21:35:10 (1713490510)
SKIP: sanity test_154b needs >= 2 MDTs
SKIP 154b (1s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 154c: lfs path2fid and fid2path multiple arguments ========================================================== 21:35:13 (1713490513)
PASS 154c (2s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 154d: Verify open file fid ================ 21:35:17 (1713490517)
mdt.lustre-MDT0000.exports.192.168.202.16@tcp.open_files=
[0x2000013a2:0x1:0x0]
[0x200000002:0x1:0x0]
[0x200000002:0x3:0x0]
[0x2000013a2:0x16:0x0]
PASS 154d (3s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 154e: .lustre is not returned by readdir == 21:35:21 (1713490521)
PASS 154e (2s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 154f: get parent fids by reading link ea == 21:35:25 (1713490525)
[0x2000013a2:0x19:0x0]/f154f.sanity
[0x2000013a2:0x1b:0x0]/link
[0x2000013a2:0x19:0x0]/f154f.sanity
[0x2000013a2:0x1b:0x0]/link
[0x2000013a2:0x19:0x0]/f154f.sanity
[0x2000013a2:0x1b:0x0]/link
[0x2000013a2:0x19:0x0]/f154f.sanity
[0x2000013a2:0x1b:0x0]/link
[0x200000007:0x1:0x0]/f
llite.lustre-ffff88012a472800.xattr_cache=1
[0x2000013a2:0x1b:0x0]/link
[0x2000013a2:0x1b:0x0]/f154f.sanity.moved
PASS 154f (3s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 154g: various llapi FID tests ============= 21:35:30 (1713490530)
Starting test test10 at 1713490530
Finishing test test10 at 1713490530
Starting test test11 at 1713490531
Finishing test test11 at 1713490531
Starting test test12 at 1713490531
Finishing test test12 at 1713490531
Starting test test20 at 1713490531
Finishing test test20 at 1713490759
Starting test test30 at 1713490811
Was able to store 155 links in the EA
Finishing test test30 at 1713490823
Starting test test31 at 1713490829
Finishing test test31 at 1713490829
Starting test test40 at 1713490829
Finishing test test40 at 1713490829
Starting test test41 at 1713490829
Finishing test test41 at 1713490829
Starting test test42 at 1713490829
Finishing test test42 at 1713490831
PASS 154g (304s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 154h: Verify interactive path2fid ========= 21:40:37 (1713490837)
[0x2000013a2:0x879:0x0]
PASS 154h (2s)
debug_raw_pointers=0 debug_raw_pointers=0
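The 155 series that follows toggles the OSS read cache before each data-correctness run; on this ZFS rig the osd parameters do not exist, which is why every run below logs No such file or directory yet still passes. On an ldiskfs OSS the toggles would be:

    lctl set_param osd-*.lustre-OST*.read_cache_enable=1
    lctl set_param osd-*.lustre-OST*.writethrough_cache_enable=0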
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 155a: Verify small file correctness: read cache:on write_cache:on ========================================================== 21:40:41 (1713490841)
oleg216-server: error: get_param: param_path 'osd-*/*/writethrough_cache_enable': No such file or directory
oleg216-server: error: get_param: param_path 'osd-*/*/writethrough_cache_enable': No such file or directory
oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/read_cache_enable': No such file or directory
oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/read_cache_enable'='1': No such file or directory
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2
oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/writethrough_cache_enable': No such file or directory
oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/writethrough_cache_enable'='1': No such file or directory
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2
1+0 records in 1+0 records out 6096 bytes (6.1 kB) copied, 0.000331973 s, 18.4 MB/s
oleg216-server: error: set_param: setting : Invalid argument
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22
oleg216-server: error: set_param: setting : Invalid argument
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22
PASS 155a (4s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 155b: Verify small file correctness: read cache:on write_cache:off ========================================================== 21:40:47 (1713490847)
oleg216-server: error: get_param: param_path 'osd-*/*/writethrough_cache_enable': No such file or directory
oleg216-server: error: get_param: param_path 'osd-*/*/writethrough_cache_enable': No such file or directory
oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/read_cache_enable': No such file or directory
oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/read_cache_enable'='1': No such file or directory
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2
oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/writethrough_cache_enable': No such file or directory
oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/writethrough_cache_enable'='0': No such file or directory
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2
1+0 records in 1+0 records out 6096 bytes (6.1 kB) copied, 0.000286447 s, 21.3 MB/s
oleg216-server: error: set_param: setting : Invalid argument
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22
oleg216-server: error: set_param: setting : Invalid argument
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22
PASS 155b (4s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 155c: Verify small file correctness: read cache:off write_cache:on ========================================================== 21:40:52 (1713490852)
oleg216-server: error: get_param: param_path 'osd-*/*/writethrough_cache_enable': No such file or directory
oleg216-server: error: get_param: param_path 'osd-*/*/writethrough_cache_enable': No such file or directory
oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/read_cache_enable': No such file or directory
oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/read_cache_enable'='0': No such file or directory
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2
oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/writethrough_cache_enable': No such file or directory
oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/writethrough_cache_enable'='1': No such file or directory
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2
1+0 records in 1+0 records out 6096 bytes (6.1 kB) copied, 0.000241403 s, 25.3 MB/s
oleg216-server: error: set_param: setting : Invalid argument
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22
oleg216-server: error: set_param: setting : Invalid argument
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22
PASS 155c (4s)
debug_raw_pointers=0 debug_raw_pointers=0
oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/read_cache_enable': No such file or directory oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/read_cache_enable'='0': No such file or directory pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2 oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/writethrough_cache_enable': No such file or directory oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/writethrough_cache_enable'='0': No such file or directory pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2 1+0 records in 1+0 records out 6096 bytes (6.1 kB) copied, 0.000248646 s, 24.5 MB/s oleg216-server: error: set_param: setting : Invalid argument pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22 oleg216-server: error: set_param: setting : Invalid argument pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22 PASS 155d (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 155e: Verify big file correctness: read cache:on write_cache:on ========================================================== 21:41:05 (1713490865) oleg216-server: error: get_param: param_path 'osd-*/*/writethrough_cache_enable': No such file or directory oleg216-server: error: get_param: param_path 'osd-*/*/writethrough_cache_enable': No such file or directory oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/read_cache_enable': No such file or directory oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/read_cache_enable'='1': No such file or directory pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2 oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/writethrough_cache_enable': No such file or directory oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/writethrough_cache_enable'='1': No such file or directory pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2 sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete OST kbytes available: 3757056 3760128 Min free space: OST 0: 3757056 Max free space: OST 1: 3760128 OSS cache size: 65536 KB Large file size: 131072 KB 1024+0 records in 1024+0 records out 134217728 bytes (134 MB) copied, 1.08381 s, 124 MB/s -rw-r--r-- 1 root root 128M Apr 18 21:41 /mnt/lustre/f155e.sanity -rw-r--r-- 1 root root 128M Apr 18 21:41 /tmp/f155e.sanity oleg216-server: error: set_param: setting : Invalid argument pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22 oleg216-server: error: set_param: setting : Invalid argument pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22 PASS 155e (27s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 155f: Verify big file correctness: read cache:on write_cache:off ========================================================== 21:41:34 (1713490894) oleg216-server: error: get_param: param_path 'osd-*/*/writethrough_cache_enable': No such file or directory oleg216-server: error: get_param: param_path 'osd-*/*/writethrough_cache_enable': No such file or directory oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/read_cache_enable': No such file or directory oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/read_cache_enable'='1': No such file or directory pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2 oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/writethrough_cache_enable': No such file 
or directory oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/writethrough_cache_enable'='0': No such file or directory pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2 sleep 5 for ZFS zfs Waiting for MDT destroys to complete OST kbytes available: 3757056 3760128 Min free space: OST 0: 3757056 Max free space: OST 1: 3760128 OSS cache size: 65536 KB Large file size: 131072 KB 1024+0 records in 1024+0 records out 134217728 bytes (134 MB) copied, 1.08187 s, 124 MB/s -rw-r--r-- 1 root root 128M Apr 18 21:41 /mnt/lustre/f155f.sanity -rw-r--r-- 1 root root 128M Apr 18 21:41 /tmp/f155f.sanity oleg216-server: error: set_param: setting : Invalid argument pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22 oleg216-server: error: set_param: setting : Invalid argument pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22 PASS 155f (19s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 155g: Verify big file correctness: read cache:off write_cache:on ========================================================== 21:41:55 (1713490915) oleg216-server: error: get_param: param_path 'osd-*/*/writethrough_cache_enable': No such file or directory oleg216-server: error: get_param: param_path 'osd-*/*/writethrough_cache_enable': No such file or directory oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/read_cache_enable': No such file or directory oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/read_cache_enable'='0': No such file or directory pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2 oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/writethrough_cache_enable': No such file or directory oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/writethrough_cache_enable'='1': No such file or directory pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2 sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete OST kbytes available: 3757056 3760128 Min free space: OST 0: 3757056 Max free space: OST 1: 3760128 OSS cache size: 65536 KB Large file size: 131072 KB 1024+0 records in 1024+0 records out 134217728 bytes (134 MB) copied, 1.08768 s, 123 MB/s -rw-r--r-- 1 root root 128M Apr 18 21:42 /mnt/lustre/f155g.sanity -rw-r--r-- 1 root root 128M Apr 18 21:42 /tmp/f155g.sanity oleg216-server: error: set_param: setting : Invalid argument pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22 oleg216-server: error: set_param: setting : Invalid argument pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22 PASS 155g (28s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 155h: Verify big file correctness: read cache:off write_cache:off ========================================================== 21:42:25 (1713490945) oleg216-server: error: get_param: param_path 'osd-*/*/writethrough_cache_enable': No such file or directory oleg216-server: error: get_param: param_path 'osd-*/*/writethrough_cache_enable': No such file or directory oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/read_cache_enable': No such file or directory oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/read_cache_enable'='0': No such file or directory pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2 oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/writethrough_cache_enable': No such file or directory oleg216-server: error: set_param: 
setting 'osd-*/lustre-OST*/writethrough_cache_enable'='0': No such file or directory pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2 sleep 5 for ZFS zfs Waiting for MDT destroys to complete OST kbytes available: 3757056 3760128 Min free space: OST 0: 3757056 Max free space: OST 1: 3760128 OSS cache size: 65536 KB Large file size: 131072 KB 1024+0 records in 1024+0 records out 134217728 bytes (134 MB) copied, 1.12282 s, 120 MB/s -rw-r--r-- 1 root root 128M Apr 18 21:42 /mnt/lustre/f155h.sanity -rw-r--r-- 1 root root 128M Apr 18 21:42 /tmp/f155h.sanity oleg216-server: error: set_param: setting : Invalid argument pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22 oleg216-server: error: set_param: setting : Invalid argument pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22 PASS 155h (18s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 156: Verification of tunables ============= 21:42:45 (1713490965) SKIP: sanity test_156 LU-1956/LU-2261: stats not implemented on OSD ZFS SKIP 156 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160a: changelog sanity ==================== 21:42:47 (1713490967) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl1' lustre-MDT0000: clear the changelog for cl1 of all records verifying changelog mask mdd.lustre-MDT0000.changelog_mask=-MKDIR mdd.lustre-MDT0000.changelog_mask=-CLOSE mdd.lustre-MDT0000.changelog_mask=+MKDIR mdd.lustre-MDT0000.changelog_mask=+CLOSE verifying target fid verifying parent fid getting records for cl1 current_index: 16 ID index (idle) mask cl1 4 (2) lustre-MDT0000: clear the changelog for cl1 to record #7 verifying user clear: 4 + 3 == 7 lustre-MDT0000.12 06UNLNK 01:42:50.364759624 2024.04.19 0x1 t=[0x2000013a2:0x888:0x0] j=rm.0 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x2000013a2:0x883:0x0] desktop.jpg lustre-MDT0000.13 01CREAT 01:42:51.250788189 2024.04.19 0x0 t=[0x2000013a2:0x88a:0x0] j=bash.0 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x2000013a2:0x885:0x0] file lustre-MDT0000.14 02MKDIR 01:42:52.277966970 2024.04.19 0x0 t=[0x2000013a2:0x88b:0x0] j=mkdir.0 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x2000013a2:0x884:0x0] sofia lustre-MDT0000.15 13TRUNC 01:42:52.287351238 2024.04.19 0xe t=[0x2000013a2:0x88a:0x0] j=bash.0 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x2000013a2:0x885:0x0] lustre-MDT0000.16 11CLOSE 01:42:52.299347069 2024.04.19 0x242 t=[0x2000013a2:0x88a:0x0] j=bash.0 ef=0xf u=0:0 nid=192.168.202.16@tcp verifying user min purge: 7 + 1 == 8 lustre-MDT0000: clear the changelog for cl1 of all records Stopping /mnt/lustre-mds1 (opts:) on oleg216-server Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg216-server: oleg216-server.virtnet: executing set_default_debug all all pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 Started lustre-MDT0000 verifying index survives MDT restart: 16 == 16 verifying users from this test are deregistered lustre-MDT0000: clear the changelog for cl1 of all records lustre-MDT0000: Deregistered changelog user #1 current_index: 16 ID index (idle) mask other changelog users; can't verify off lustre-MDT0000: changelog user 'cl1' not found PASS 160a (16s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160b: Verify that very long rename doesn't crash in changelog ========================================================== 21:43:05 (1713490985) 
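[note: the 160* tests that follow exercise the MDT changelog consumer API end to end: register a user, generate metadata activity, read the records, acknowledge them with a clear, and deregister. A minimal consumer round trip using the same commands this log shows, assuming a single MDT named lustre-MDT0000 and an assigned user name of cl1 (the lctl calls run on the MDS):
    # register a consumer; the MDT assigns and prints a user name such as cl1
    lctl --device lustre-MDT0000 changelog_register
    # optionally widen the record mask, as these tests do, so HSM events are kept
    lctl set_param mdd.lustre-MDT0000.changelog_mask=+hsm
    # read pending records from a client
    lfs changelog lustre-MDT0000
    # acknowledge consumed records; an end record of 0 clears everything
    lfs changelog_clear lustre-MDT0000 cl1 0
    # drop the consumer so old records can be purged
    lctl --device lustre-MDT0000 changelog_deregister cl1]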
mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl2' creating very long named file renaming very long named file lustre-MDT0000.19 08RENME 01:43:08.627269670 2024.04.19 0x0 t=[0:0x0:0x0] j=mv.0 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200000007:0x1:0x0] bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb s=[0x2000013a2:0x88c:0x0] sp=[0x200000007:0x1:0x0] aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa lustre-MDT0000: clear the changelog for cl2 of all records lustre-MDT0000: Deregistered changelog user #2 PASS 160b (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160c: verify that the changelog catches the truncate event ========================================================== 21:43:12 (1713490992) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl3' mdd.lustre-MDT0000.changelog_mask=-TRUNC mdd.lustre-MDT0000.changelog_mask=+TRUNC lustre-MDT0000.21 02MKDIR 01:43:14.794401748 2024.04.19 0x0 t=[0x2000013a2:0x88d:0x0] j=mkdir.0 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200000007:0x1:0x0] d160c.sanity lustre-MDT0000.22 01CREAT 01:43:14.802857098 2024.04.19 0x0 t=[0x2000013a2:0x88e:0x0] j=mcreate.0 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x2000013a2:0x88d:0x0] foo_160c lustre-MDT0000.23 14SATTR 01:43:15.095715932 2024.04.19 0xe t=[0x2000013a2:0x88e:0x0] j=truncate.0 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x2000013a2:0x88d:0x0] lustre-MDT0000.24 13TRUNC 01:43:15.393280859 2024.04.19 0xe t=[0x2000013a2:0x88e:0x0] j=truncate.0 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x2000013a2:0x88d:0x0] lustre-MDT0000: clear the changelog for cl3 of all records lustre-MDT0000: Deregistered changelog user #3 PASS 160c (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160d: verify that the changelog catches the migrate event ========================================================== 21:43:19 (1713490999) SKIP: sanity test_160d needs >= 2 MDTs SKIP 160d (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160e: changelog negative testing (should return errors) ========================================================== 21:43:23 (1713491003) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl4' pdsh@oleg216-client: oleg216-server: ssh exited with exit code 4 deregister an existing changelog user usage: --device <devno|devname> changelog_deregister [<id>|cl<id>...] [--help|-h] [--user|-u <username>] run <command> after connecting to device <devno|devname> --device <devno|devname> oleg216-server: error: changelog_deregister: User not found pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2 lfs changelog_clear: cannot purge records for 'cl4': Invalid argument (22) changelog_clear: record out of range: 1000000000 lustre-MDT0000: clear the changelog for cl4 of all records lustre-MDT0000: Deregistered changelog user #4 PASS 160e (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160f: changelog garbage collect (timestamped users) ========================================================== 21:43:30 (1713491010) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl5' mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl5 cl6' 1713491013: creating first files mdd.lustre-MDT0000.changelog_max_idle_time=10 mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0000.changelog_min_gc_interval=2 mdd.lustre-MDT0000.changelog_min_free_cat_entries=3 1713491017: sleep1 5/10s fail_loc=0x1313 fail_val=3 lustre-MDT0000: clear the changelog for cl5 to record #26 mds1: verifying user cl5 clear: 24 + 2 == 26 1713491023: sleep2 2/10s 1713491025: creating 2 files pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 mds1: 1713491027 verify rec 26+1 == 27 mdd.lustre-MDT0000.changelog_min_free_cat_entries=2 mdd.lustre-MDT0000.changelog_min_gc_interval=3600 mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0000.changelog_max_idle_time=2592000 lustre-MDT0000: changelog user 'cl6' not found lustre-MDT0000: clear the changelog for cl5 of all records lustre-MDT0000: Deregistered changelog user #5 PASS 160f (22s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160g: changelog garbage collect on idle records ========================================================== 21:43:54 (1713491034) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl7' mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl7 cl8' mdd.lustre-MDT0000.changelog_max_idle_indexes=2 mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0000.changelog_min_gc_interval=2 lustre-MDT0000: clear the changelog for cl7 to record #31 mds1: verifying user1 cl7 clear: 29 + 2 == 31 sleep 1 for interval pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 mds1: 1713491043 verify rec 31+1 == 32 mdd.lustre-MDT0000.changelog_min_gc_interval=3600 mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0000.changelog_max_idle_indexes=2097446912 lustre-MDT0000: changelog user 'cl8' not found lustre-MDT0000: clear the changelog for cl7 of all records lustre-MDT0000: Deregistered changelog user #7 PASS 160g (13s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160h: changelog gc thread stop upon umount, orphan records delete ========================================================== 21:44:10 (1713491050) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl9' mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl9 cl10' mdd.lustre-MDT0000.changelog_max_idle_time=10 mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0000.changelog_min_gc_interval=2 lustre-MDT0000: clear the changelog for cl9 to record #35 mds1: verifying user cl9 clear: 33 + 2 == 35 fail_loc=0x1316 total: 2 create in 0.02 seconds: 97.13 ops/second Stopping /mnt/lustre-mds1 (opts:) on oleg216-server fail_loc=0 Starting mds1: -o localrecov
lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg216-server: oleg216-server.virtnet: executing set_default_debug all all pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 Started lustre-MDT0000 mds1: verifying first index 35 + 1 == 36 mdd.lustre-MDT0000.changelog_min_gc_interval=3600 mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0000.changelog_max_idle_time=2592000 lustre-MDT0000: changelog user 'cl10' not found lustre-MDT0000: clear the changelog for cl9 of all records lustre-MDT0000: Deregistered changelog user #9 PASS 160h (28s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160i: changelog user register/unregister race ========================================================== 21:44:40 (1713491080) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl10' fail_loc=0x10001315 fail_val=1 lustre-MDT0000: clear the changelog for cl10 of all records mdd.lustre-MDT0000.changelog_mask=+hsm lustre-MDT0000: Deregistered changelog user #10 Registered 1 changelog users: 'cl10 cl11' cl11 41 (0) total: 2 create in 0.01 seconds: 154.71 ops/second verify changelogs are on: 43 != 41 lustre-MDT0000: clear the changelog for cl11 of all records lustre-MDT0000: Deregistered changelog user #11 lustre-MDT0000: changelog user 'cl10' not found PASS 160i (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160j: client can be umounted while its changelog is being used ========================================================== 21:44:52 (1713491092) Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre2 mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl12' Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre lustre-MDT0000: clear the changelog for cl12 of all records lustre-MDT0000: Deregistered changelog user #12 lustre-MDT0000: changelog user 'cl12' not found PASS 160j (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160k: Verify that changelog records are not lost ========================================================== 21:44:59 (1713491099) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl13' fail_loc=0x8000015d fail_val=3 lustre-MDT0000.52 07RMDIR 01:45:01.933244317 2024.04.19 0x1 t=[0x200002341:0x3:0x0] j=rmdir.0 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200002341:0x2:0x0] 1 lustre-MDT0000: clear the changelog for cl13 of all records lustre-MDT0000: Deregistered changelog user #13 PASS 160k (11s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160l: Verify that MTIME changelog records contain the parent FID ========================================================== 21:45:13 (1713491113) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl14' mdd.lustre-MDT0000.changelog_mask=-CREAT mdd.lustre-MDT0000.changelog_mask=-CLOSE lustre-MDT0000: clear the changelog for cl14 of all records lustre-MDT0000: Deregistered changelog user #14 PASS 160l (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160m: Changelog clear race ================ 21:45:23 (1713491123) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl15' mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl15 cl16' total: 50 create in 0.25 seconds: 200.25 ops/second - unlinked
0 (time 1713491126 ; total 0 ; last 0) total: 50 unlinks in 0 seconds: inf unlinks/second rm: cannot remove '/mnt/lustre/d160m.sanity': Is a directory fail_loc=0x8000015f fail_val=0 lustre-MDT0000: clear the changelog for cl15 to record #65 lustre-MDT0000: clear the changelog for cl16 of all records lustre-MDT0000: clear the changelog for cl15 of all records lustre-MDT0000: clear the changelog for cl16 of all records lustre-MDT0000: Deregistered changelog user #16 lustre-MDT0000: clear the changelog for cl15 of all records lustre-MDT0000: Deregistered changelog user #15 PASS 160m (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160n: Changelog destroy race ============== 21:45:35 (1713491135) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl17' - create 5343 (time 1713491148.16 total 10.00 last 534.29) total: 10000 create in 19.89 seconds: 502.65 ops/second rename '/mnt/lustre/d160n.sanity/f160n.sanity10000' returned -1: No such file or directory - unlinked 0 (time 1713491328 ; total 0 ; last 0) total: 10000 unlinks in 36 seconds: 277.777771 unlinks/second last record 30157 - create 5157 (time 1713491376.12 total 10.00 last 515.64) - create 9942 (time 1713491386.12 total 20.00 last 478.50) total: 10000 create in 20.13 seconds: 496.77 ops/second rename '/mnt/lustre/d160n.sanity/f160n.sanity10000' returned -1: No such file or directory - unlinked 0 (time 1713491558 ; total 0 ; last 0) total: 10000 unlinks in 39 seconds: 256.410248 unlinks/second last record 60157 - create 4775 (time 1713491609.65 total 10.00 last 477.50) - create 9379 (time 1713491619.65 total 20.00 last 460.33) total: 10000 create in 21.31 seconds: 469.17 ops/second rename '/mnt/lustre/d160n.sanity/f160n.sanity10000' returned -1: No such file or directory - unlinked 0 (time 1713491793 ; total 0 ; last 0) total: 10000 unlinks in 40 seconds: 250.000000 unlinks/second last record 90157 fail_loc=0x8000016c fail_val=0 lustre-MDT0000: clear the changelog for cl17 of all records lustre-MDT0000: clear the changelog for cl17 of all records lustre-MDT0000: clear the changelog for cl17 of all records lustre-MDT0000: Deregistered changelog user #17 PASS 160n (707s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160o: changelog user name and mask ======== 21:57:25 (1713491845) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl18-test_160o' oleg216-server: error: changelog_register: Invalid argument pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22 oleg216-server: error: changelog_register: User exists pdsh@oleg216-client: oleg216-server: ssh exited with exit code 17 oleg216-server: error: changelog_register: File name too long pdsh@oleg216-client: oleg216-server: ssh exited with exit code 36 mdd.lustre-MDT0000.changelog_mask=MARK+HSM error: get_param: param_path 'mdd/*/changelog*mask': No such file or directory lustre-MDT0000: clear the changelog for cl18-test_160o of all records mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl18-test_160o cl22' mdd.lustre-MDT0000.changelog_mask=MARK mdd.lustre-MDT0000.changelog_mask=CLOSE,UNLNK lustre-MDT0000: Deregistered changelog user #18 lustre-MDT0000: clear the changelog for cl22 of all records lustre-MDT0000: Deregistered changelog user #22 lustre-MDT0000: changelog user 'cl18-test_160o' not found PASS 160o (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity 
test 160p: Changelog orphan cleanup with no users ========================================================== 21:57:38 (1713491858) SKIP: sanity test_160p ldiskfs only test SKIP 160p (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160q: changelog effective mask is DEFMASK if not set ========================================================== 21:57:41 (1713491861) mdd.lustre-MDT0000.changelog_mask=MARK lustre-MDT0000: Deregistered changelog user #23 PASS 160q (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160s: changelog garbage collect on idle records * time ========================================================== 21:57:48 (1713491868) fail_loc=0x1314 fail_val=864000 mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl24' mdd.lustre-MDT0000.changelog_max_idle_indexes=2097446912 mdd.lustre-MDT0000.changelog_max_idle_time=2592000 mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0000.changelog_min_gc_interval=2 fail_loc=0x16d fail_val=500000000 sleep 2 for interval pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 fail_loc=0 mdd.lustre-MDT0000.changelog_min_gc_interval=3600 mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0000.changelog_max_idle_time=2592000 mdd.lustre-MDT0000.changelog_max_idle_indexes=2097446912 lustre-MDT0000: changelog user 'cl24' not found PASS 160s (12s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160t: changelog garbage collect on lack of space ========================================================== 21:58:03 (1713491883) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl25-user1' total: 2000 open/close in 9.93 seconds: 201.42 ops/second mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl25-user1 cl26-user2' total: 500 open/close in 2.36 seconds: 211.76 ops/second mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0000.changelog_min_gc_interval=2 sleep 2 for interval fail_loc=0x018c fail_val=1212108 total: 4 open/close in 0.09 seconds: 43.87 ops/second Waiting 20s for '' pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 fail_loc=0 mdd.lustre-MDT0000.changelog_min_gc_interval=3600 mdd.lustre-MDT0000.changelog_gc=1 lustre-MDT0000: clear the changelog for cl26-user2 of all records lustre-MDT0000: Deregistered changelog user #26 lustre-MDT0000: changelog user 'cl25-user1' not found PASS 160t (29s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160u: changelog rename record type name and sname strings are correct ========================================================== 21:58:35 (1713491915) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl27' creating simple directory tree creating rename/hw file creating very long named file move rename/hw to rename/a/a.hw lustre-MDT0000: clear the changelog for cl27 of all records lustre-MDT0000: Deregistered changelog user #27 PASS 160u (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 161a: link ea sanity ====================== 21:58:43 (1713491923) total: 1000 link in 3.87 seconds: 258.29 ops/second 74/1000 links in link EA - unlinked 0 (time 1713491932 ; total 0 ; last 0) total: 1000 unlinks in 3 seconds: 333.333344 unlinks/second PASS 161a (13s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 161b: 
link ea sanity under remote directory ========================================================== 21:58:58 (1713491938) SKIP: sanity test_161b skipping remote directory test SKIP 161b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 161c: check CL_RENME[UNLINK] changelog record flags ========================================================== 21:59:01 (1713491941) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl28' lustre-MDT0000.500097692 08RENME 01:59:04.250055211 2024.04.19 0x1 t=[0x200002341:0x7f47:0x0] j=mv.0 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200002341:0x7f45:0x0] bar_161c s=[0x200002341:0x7f46:0x0] sp=[0x200002341:0x7f45:0x0] foo_161c lustre-MDT0000: clear the changelog for cl28 of all records rename overwrite target with nlink = 1, changelog flags=0x1 lustre-MDT0000.500097698 08RENME 01:59:04.468126962 2024.04.19 0x0 t=[0x200002341:0x7f46:0x0] j=mv.0 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200002341:0x7f45:0x0] bar_161c s=[0x200002341:0x7f48:0x0] sp=[0x200002341:0x7f45:0x0] foo_161c lustre-MDT0000: clear the changelog for cl28 of all records rename overwrite a target having nlink > 1, changelog record has flags of 0x0 lustre-MDT0000.500097701 08RENME 01:59:04.652695472 2024.04.19 0x0 t=[0:0x0:0x0] j=mv.0 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200002341:0x7f45:0x0] foo2_161c s=[0x200002341:0x7f4a:0x0] sp=[0x200002341:0x7f45:0x0] foo_161c lustre-MDT0000: clear the changelog for cl28 of all records rename doesn't overwrite a target, changelog record has flags of 0x0 lustre-MDT0000.500097702 06UNLNK 01:59:04.796164692 2024.04.19 0x1 t=[0x200002341:0x7f4a:0x0] j=rm.0 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200002341:0x7f45:0x0] foo2_161c lustre-MDT0000: clear the changelog for cl28 of all records unlink a file having nlink = 1, changelog record has flags of 0x1 lustre-MDT0000.500097703 06UNLNK 01:59:04.944624757 2024.04.19 0x1 t=[0x200002341:0x7f46:0x0] j=ln.0 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200002341:0x7f45:0x0] foobar_161c lustre-MDT0000.500097705 06UNLNK 01:59:04.968755894 2024.04.19 0x0 t=[0x200002341:0x7f48:0x0] j=rm.0 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200002341:0x7f45:0x0] foobar_161c lustre-MDT0000: clear the changelog for cl28 of all records unlink a file having nlink > 1, changelog record flags '0x0' lustre-MDT0000: clear the changelog for cl28 of all records lustre-MDT0000: Deregistered changelog user #28 PASS 161c (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 161d: create with concurrent .lustre/fid access ========================================================== 21:59:09 (1713491949) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl29' fail_loc=0x8000140c fail_val=5 PID TTY TIME CMD 18706 pts/0 00:00:00 bash fail_loc=0 lustre-MDT0000: clear the changelog for cl29 of all records lustre-MDT0000: Deregistered changelog user #29 PASS 161d (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 162a: path lookup sanity ================== 21:59:18 (1713491958) FID '0x200002341:0x7f4f:0x0' resolves to path 'd162a.sanity/d2/f162a.sanity' as expected FID '0x200002341:0x7f58:0x0' resolves to path 'd162a.sanity/d2/p/q/r/slink' as expected FID '0x200002341:0x7f59:0x0' resolves to path 'd162a.sanity/d2/p/q/r/slink.wrong' as expected FID '0x200002341:0x7f4f:0x0' resolves to path 'd162a.sanity/d2/a/b/c/new_file' as expected FID '0x200002341:0x7f4f:0x0' resolves 
to path '/mnt/lustre/d162a.sanity/d2/p/q/r/hlink' as expected FID '0x200002341:0x7f4f:0x0' resolves to path 'd162a.sanity/d2/a/b/c/new_file' as expected PASS 162a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 162b: striped directory path lookup sanity ========================================================== 21:59:24 (1713491964) SKIP: sanity test_162b needs >= 2 MDTs SKIP 162b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 162c: fid2path works with paths 100 or more directories deep ========================================================== 21:59:27 (1713491967) FID '0x200002341:0x7f5c:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0' as expected FID '0x200002341:0x7f5d:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0' as expected FID '0x200002341:0x7f5e:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1' as expected FID '0x200002341:0x7f5f:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1' as expected FID '0x200002341:0x7f60:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2' as expected FID '0x200002341:0x7f61:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2' as expected FID '0x200002341:0x7f62:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3' as expected FID '0x200002341:0x7f63:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3' as expected FID '0x200002341:0x7f64:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4' as expected FID '0x200002341:0x7f65:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4' as expected FID '0x200002341:0x7f66:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5' as expected FID '0x200002341:0x7f67:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5' as expected FID '0x200002341:0x7f68:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6' as expected FID '0x200002341:0x7f69:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6' as expected FID '0x200002341:0x7f6a:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7' as expected FID '0x200002341:0x7f6b:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7' as expected FID '0x200002341:0x7f6c:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8' as expected FID '0x200002341:0x7f6d:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8' as expected FID '0x200002341:0x7f6e:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9' as expected FID '0x200002341:0x7f6f:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9' as expected FID '0x200002341:0x7f70:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10' as expected FID '0x200002341:0x7f71:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10' as expected FID '0x200002341:0x7f72:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11' as expected FID '0x200002341:0x7f73:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11' as expected FID '0x200002341:0x7f74:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12' as expected FID '0x200002341:0x7f75:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12' as expected FID '0x200002341:0x7f76:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13' as 
expected FID '0x200002341:0x7f77:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13' as expected [... output elided: the run continues in this exact pattern, each successive pair of FIDs resolving to the .local and .remote trees extended by one more component ('/14', '/15', ...), every lookup reported 'as expected'; this capture reaches the depth-82 entry of the .local tree and is then cut off mid-record ...]
'/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82' as expected FID '0x200002341:0x8002:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83' as expected FID '0x200002341:0x8003:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83' as expected FID '0x200002341:0x8004:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84' as expected FID '0x200002341:0x8005:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84' as expected FID '0x200002341:0x8006:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85' as expected FID '0x200002341:0x8007:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85' as expected FID '0x200002341:0x8008:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86' as expected FID '0x200002341:0x8009:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86' as expected FID '0x200002341:0x800a:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87' as expected FID '0x200002341:0x800b:0x0' resolves to path 
'/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87' as expected FID '0x200002341:0x800c:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88' as expected FID '0x200002341:0x800d:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88' as expected FID '0x200002341:0x800e:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89' as expected FID '0x200002341:0x800f:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89' as expected FID '0x200002341:0x8010:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90' as expected FID '0x200002341:0x8011:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90' as expected FID '0x200002341:0x8012:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91' as expected FID '0x200002341:0x8013:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91' as expected FID '0x200002341:0x8014:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92' as expected FID '0x200002341:0x8015:0x0' resolves to path 
'/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92' as expected FID '0x200002341:0x8016:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93' as expected FID '0x200002341:0x8017:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93' as expected FID '0x200002341:0x8018:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94' as expected FID '0x200002341:0x8019:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94' as expected FID '0x200002341:0x801a:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95' as expected FID '0x200002341:0x801b:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95' as expected FID '0x200002341:0x801c:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96' as expected FID '0x200002341:0x801d:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96' as expected FID '0x200002341:0x801e:0x0' resolves to path 
'/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97' as expected FID '0x200002341:0x801f:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97' as expected FID '0x200002341:0x8020:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98' as expected FID '0x200002341:0x8021:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98' as expected FID '0x200002341:0x8022:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99' as expected FID '0x200002341:0x8023:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99' as expected FID '0x200002341:0x8024:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99/100' as expected FID '0x200002341:0x8025:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99/100' as expected FID '0x200002341:0x8026:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99/100/101' as expected FID '0x200002341:0x8027:0x0' resolves to path 
'/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99/100/101' as expected
PASS 162c (10s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 165a: ofd access log discovery ============ 21:59:39 (1713491979)
obdfilter.lustre-OST0000.access_log_size=4096
- name: lustre-OST0000
  version: 0x10000
  type: 0x1
  log_size: 4096
  entry_size: 64
Stopping /mnt/lustre-ost1 (opts:) on oleg216-server
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg216-server: oleg216-server.virtnet: executing set_default_debug all all
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
Started lustre-OST0000
PASS 165a (16s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 165b: ofd access log entries are produced and consumed ========================================================== 21:59:58 (1713491998)
obdfilter.lustre-OST0000.access_log_size=4096
- name: lustre-OST0000
  version: 0x10000
  type: 0x1
  log_size: 4096
  entry_size: 64
entry = '- TRACE alr_log_entry lustre-OST0000 [0x200002341:0x8028:0x0] 0 1048576 1713492005 1048576 1 w'
entry = '- TRACE alr_log_entry lustre-OST0000 [0x200002341:0x8028:0x0] 0 524288 1713492015 524288 1 r'
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
Stopping /mnt/lustre-ost1 (opts:) on oleg216-server
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg216-server: oleg216-server.virtnet: executing set_default_debug all all
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
Started lustre-OST0000
PASS 165b (30s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 165c: full ofd access logs do not block IOs ========================================================== 22:00:30 (1713492030)
obdfilter.lustre-OST0000.access_log_size=4096
- unlinked 0 (time 1713492040 ; total 0 ; last 0)
total: 128 unlinks in 1 seconds: 128.000000 unlinks/second
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
Stopping /mnt/lustre-ost1 (opts:) on oleg216-server
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg216-server: oleg216-server.virtnet: executing set_default_debug all all
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
Started lustre-OST0000
PASS 165c (19s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 165d: ofd_access_log mask works =========== 22:00:52 (1713492052)
obdfilter.lustre-OST0000.access_log_size=4096
obdfilter.lustre-OST0000.access_log_mask=rw
obdfilter.lustre-OST0000.access_log_mask=r
obdfilter.lustre-OST0000.access_log_mask=w
obdfilter.lustre-OST0000.access_log_mask=0
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
Stopping /mnt/lustre-ost1 (opts:) on oleg216-server
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg216-server: oleg216-server.virtnet: executing set_default_debug all all
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
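The long run of FID-to-path resolutions that closes test 162c above is driven by lfs fid2path. A minimal by-hand sketch of the same round trip, assuming a client mounted at /mnt/lustre (the file name below is illustrative, not taken from this run):

    # learn a file's FID, then resolve that FID back into a pathname
    touch /mnt/lustre/f-fid-demo
    fid=$(lfs path2fid /mnt/lustre/f-fid-demo)   # prints e.g. [0x200002341:0x8028:0x0]
    lfs fid2path /mnt/lustre "$fid"              # expect: /mnt/lustre/f-fid-demo

Test 162c simply repeats this check at every depth of the d162c.sanity.local and d162c.sanity.remote directory trees.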
Started lustre-OST0000 PASS 165d (30s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 165e: ofd_access_log MDT index filter works ========================================================== 22:01:24 (1713492084) SKIP: sanity test_165e needs >= 2 MDTs SKIP 165e (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 165f: ofd_access_log_reader --exit-on-close works ========================================================== 22:01:28 (1713492088) obdfilter.lustre-OST0000.access_log_size=4096 Stopping /mnt/lustre-ost1 (opts:) on oleg216-server pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg216-server: oleg216-server.virtnet: executing set_default_debug all all pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 Started lustre-OST0000 PASS 165f (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 169: parallel read and truncate should not deadlock ========================================================== 22:01:45 (1713492105) creating a 10 Mb file starting reads truncating the file 2560+0 records in 2560+0 records out 10485760 bytes (10 MB) copied, 0.304274 s, 34.5 MB/s killing dd wait until dd is finished removing the temporary file PASS 169 (18s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 170: test lctl df to handle corrupted log =============================================================================== 22:02:05 (1713492125) PASS 170 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 171: test libcfs_debug_dumplog_thread stuck in do_exit() ================================================================ 22:02:11 (1713492131) fail_loc=0x50e fail_val=3000 multiop /mnt/lustre/f171.sanity vO_s TMPPIPE=/tmp/multiop_open_wait_pipe.6927 fail_loc=0 PASS 171 (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 172: manual device removal with lctl cleanup/detach ================================================================ 22:02:19 (1713492139) fail_loc=0x60e Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre PASS 172 (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 180a: test obdecho on osc ================= 22:02:25 (1713492145) SKIP: sanity test_180a obdecho on osc is no longer supported SKIP 180a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 180b: test obdecho directly on obdfilter == 22:02:28 (1713492148) oleg216-server: oleg216-server.virtnet: executing load_module obdecho/obdecho New object id is 0x2 valid: 0x1100000000007bf atime: 0 mtime: 0 ctime: 0 size: 0 blocks: 1 mode: 0107666 uid: 0 gid: 0 projid: 0 data_version: 0 Print status every operation test_brw: writing 10x64 pages (obj 0x2, off 0): Thu Apr 18 22:02:33 2024 test_brw: write number 1 @ 2:0 for 262144 test_brw: write number 2 @ 2:262144 for 262144 test_brw: write number 3 @ 2:524288 for 262144 test_brw: write number 4 @ 2:786432 for 262144 test_brw: write number 5 @ 2:1048576 for 262144 test_brw: write number 6 @ 2:1310720 for 262144 test_brw: write number 7 @ 2:1572864 for 262144 test_brw: write number 8 @ 2:1835008 for 
262144 test_brw: write number 9 @ 2:2097152 for 262144 test_brw: write number 10 @ 2:2359296 for 262144 test_brw: wrote 10x64 pages in 0.016s (153.478 MB/s): Thu Apr 18 22:02:33 2024 destroy: 1 objects destroy: #1 is object id 0x2 PASS 180b (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 180c: test huge bulk I/O size on obdfilter, don't LASSERT ========================================================== 22:02:38 (1713492158) oleg216-server: oleg216-server.virtnet: executing load_module obdecho/obdecho New object id is 0x3 valid: 0x1100000000007bf atime: 0 mtime: 0 ctime: 0 size: 0 blocks: 1 mode: 0107666 uid: 0 gid: 0 projid: 0 data_version: 0 Print status every operation test_brw: writing 10x16384 pages (obj 0x3, off 0): Thu Apr 18 22:02:43 2024 test_brw: write number 1 @ 3:0 for 67108864 test_brw: write number 2 @ 3:67108864 for 67108864 test_brw: write number 3 @ 3:134217728 for 67108864 test_brw: write number 4 @ 3:201326592 for 67108864 test_brw: write number 5 @ 3:268435456 for 67108864 test_brw: write number 6 @ 3:335544320 for 67108864 test_brw: write number 7 @ 3:402653184 for 67108864 test_brw: write number 8 @ 3:469762048 for 67108864 test_brw: write number 9 @ 3:536870912 for 67108864 test_brw: write number 10 @ 3:603979776 for 67108864 test_brw: wrote 10x16384 pages in 1.312s (487.830 MB/s): Thu Apr 18 22:02:44 2024 destroy: 1 objects destroy: #1 is object id 0x3 PASS 180c (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 181: Test open-unlinked dir ================================================================================== 22:02:49 (1713492169) - open/close 2457 (time 1713492180.93 total 10.00 last 245.67) total: 4000 open/close in 16.17 seconds: 247.42 ops/second --------------e------- . 
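The open/close and unlink rates reported for test 181 above come from the suite's createmany and unlinkmany helpers (built under lustre/tests). A rough by-hand equivalent, assuming the helpers are on PATH and the target directory already exists (names are illustrative):

    # create 4000 files via open/close, then bulk-unlink them
    createmany -o /mnt/lustre/d181.sanity/f181. 4000
    unlinkmany /mnt/lustre/d181.sanity/f181. 4000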
multiop /mnt/lustre/d181.sanity vD_Sc TMPPIPE=/tmp/multiop_open_wait_pipe.6927
- unlinked 0 (time 1713492188 ; total 0 ; last 0)
total: 4000 unlinks in 9 seconds: 444.444458 unlinks/second
stat: cannot stat '/mnt/lustre/d181.sanity': No such file or directory
PASS 181 (30s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 182a: Test parallel modify metadata operations from mdc ========================================================== 22:03:21 (1713492201)
mdc.lustre-MDT0000-mdc-ffff8800add22800.rpc_stats=clear
total: 1000 open/close in 2.63 seconds: 379.54 ops/second
total: 1000 open/close in 2.64 seconds: 378.71 ops/second
total: 1000 open/close in 2.61 seconds: 383.07 ops/second
total: 1000 open/close in 2.65 seconds: 377.19 ops/second
total: 1000 open/close in 2.72 seconds: 367.26 ops/second
total: 1000 open/close in 2.68 seconds: 372.74 ops/second
total: 1000 open/close in 2.68 seconds: 372.96 ops/second
total: 1000 open/close in 2.72 seconds: 367.23 ops/second
total: 1000 open/close in 2.76 seconds: 362.70 ops/second
total: 1000 open/close in 2.78 seconds: 359.50 ops/second
- unlinked 0 (time 1713492207 ; total 0 ; last 0)
total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second
- unlinked 0 (time 1713492207 ; total 0 ; last 0)
total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second
- unlinked 0 (time 1713492207 ; total 0 ; last 0)
total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second
- unlinked 0 (time 1713492207 ; total 0 ; last 0)
total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second
- unlinked 0 (time 1713492207 ; total 0 ; last 0)
total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second
- unlinked 0 (time 1713492207 ; total 0 ; last 0)
total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second
- unlinked 0 (time 1713492207 ; total 0 ; last 0)
total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second
- unlinked 0 (time 1713492207 ; total 0 ; last 0)
total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second
- unlinked 0 (time 1713492207 ; total 0 ; last 0)
total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second
- unlinked 0 (time 1713492207 ; total 0 ; last 0)
total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second
mdc.lustre-MDT0000-mdc-ffff8800add22800.rpc_stats=
snapshot_time:         1713492210.211047659 secs.nsecs
start_time:            1713492202.459294973 secs.nsecs
elapsed_time:          7.751752686 secs.nsecs
modify_RPCs_in_flight: 0

                        modify
rpcs in flight        rpcs   %% cum %%
0:                       0    0    0
1:                      75    0    0
2:                     200    0    0
3:                     206    0    1
4:                     229    0    2
5:                     445    1    3
6:                     420    1    5
7:                   28040   93   98
8:                     395    1  100

read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0

                        read                    write
pages per rpc         rpcs   % cum % |       rpcs   % cum %
1:                       0   0   0   |          0   0   0

                        read                    write
rpcs in flight        rpcs   % cum % |       rpcs   % cum %
1:                       0   0   0   |          0   0   0

                        read                    write
offset                rpcs   % cum % |       rpcs   % cum %
0:                       0   0   0   |          0   0   0

PASS 182a (10s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 182b: Test parallel modify metadata operations from osp ========================================================== 22:03:34 (1713492214)
SKIP: sanity test_182b needs >= 2 MDTs
SKIP 182b (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 183: No crash or request leak in case of strange dispositions ================================================================== 22:03:37 (1713492217)
fail_loc=0x148
ls: cannot open directory /mnt/lustre/d183.sanity: No such file or directory
cat:
/mnt/lustre/d183.sanity/f183.sanity: No such file or directory fail_loc=0 touch: cannot touch '/mnt/lustre/d183.sanity/f183.sanity': No such file or directory PASS 183 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 184a: Basic layout swap =================== 22:03:43 (1713492223) PASS 184a (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 184b: Forbidden layout swap (will generate errors) ========================================================== 22:03:49 (1713492229) lfs swap_layouts: error: cannot open '/mnt/lustre/d184b.sanity/184b/d1' for write: Is a directory (21) lfs swap_layouts: error: cannot open '/mnt/lustre/d184b.sanity/184b/d1' for write: Is a directory (21) running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [swap_layouts] [/mnt/lustre/d184b.sanity/184b/f1] [/mnt/lustre/d184b.sanity/184b/f2] lfs swap_layouts: error: cannot open '/mnt/lustre/d184b.sanity/184b/f1' for write: Permission denied (13) lfs swap_layouts: error: cannot swap layout between '/mnt/lustre/d184b.sanity/184b/f1' and '/mnt/lustre/d184b.sanity/184b/f3': Operation not permitted (1) PASS 184b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 184c: Concurrent write and layout swap ==== 22:03:54 (1713492234) 27+0 records in 27+0 records out 28311552 bytes (28 MB) copied, 0.910792 s, 31.1 MB/s 31+0 records in 31+0 records out 32505856 bytes (33 MB) copied, 1.21118 s, 26.8 MB/s ref file size: ref1(28311552), ref2(32505856) 1728+0 records in 1728+0 records out 28311552 bytes (28 MB) copied, 1.92114 s, 14.7 MB/s Copied 1982464 bytes before swapping layout... PASS 184c (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 184d: allow stripeless layouts swap ======= 22:04:06 (1713492246) Succeed in opening file "/mnt/lustre/d184d.sanity/f184d.sanity-2"(flags=O_CREAT) Succeed in opening file "/mnt/lustre/d184d.sanity/f184d.sanity-3"(flags=O_CREAT) -c 1 -S 4194304 -L raid0 -i 1 -c 1 -S 4194304 -L raid0 -i 1 /mnt/lustre/d184d.sanity/f184d.sanity-1: trusted.lov: No such attribute PASS 184d (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 184e: Recreate layout after stripeless layout swaps ========================================================== 22:04:13 (1713492253) Succeed in opening file "/mnt/lustre/d184e.sanity/f184e.sanity-2"(flags=O_CREAT) Succeed in opening file "/mnt/lustre/d184e.sanity/f184e.sanity-3"(flags=O_CREAT) /mnt/lustre/d184e.sanity/f184e.sanity-1: trusted.lov: No such attribute PASS 184e (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 184f: IOC_MDC_GETFILEINFO for files with long names but no striping ========================================================== 22:04:19 (1713492259) error: bad stripe_count '0x6666' PASS 184f (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 185: Volatile file support ================ 22:04:24 (1713492264) Can't lstat /mnt/lustre/.lustre/fid/[0x200002342:0x36dd:0x0]: No such file or directory multiop /mnt/lustre/d185.sanity vVw4096_c TMPPIPE=/tmp/multiop_open_wait_pipe.6927 /mnt/lustre/.lustre/fid/[0x200002342:0x36de:0x0] has type file OK PASS 185 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 
185a: Volatile file creation in .lustre/fid/ ========================================================== 22:04:30 (1713492270) /mnt/lustre/.lustre/fid/[0x200002342:0x36df:0x0] has type file OK Can't lstat /mnt/lustre/.lustre/fid/[0x200002342:0x36df:0x0]: No such file or directory PASS 185a (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 187a: Test data version change ============ 22:04:37 (1713492277) 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.472541 s, 22.2 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.104743 s, 10.0 MB/s PASS 187a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 187b: Test data version change on volatile file ========================================================== 22:04:42 (1713492282) PASS 187b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 200: OST pools ============================ 22:04:46 (1713492286) Creating new pool oleg216-server: Pool lustre.cea1 created Adding targets to pool oleg216-server: OST lustre-OST0000_UUID added to pool lustre.cea1 Waiting 90s for 'lustre-OST0000_UUID ' Setting pool on directory /mnt/lustre/d200.pools/dir_tst Checking pool on directory /mnt/lustre/d200.pools/dir_tst Checking pool on directory /mnt/lustre/d200.pools/dir_tst/subdir Testing relative path works well Setting pool on directory dir_tst Setting pool on directory ./dir_tst Setting pool on directory ../dir_tst Setting pool on directory ../dir_tst/dir_tst Checking files allocation from directory pool Creating files in pool Checking 'lfs df' output Creating files in a pool with relative pathname Removing first target from a pool Removing lustre-OST0000_UUID from cea1 oleg216-server: OST lustre-OST0000_UUID removed from pool lustre.cea1 pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 Removing all targets from pool Destroying pool oleg216-server: Pool lustre.cea1 destroyed PASS 200 (16s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 204a: Print default stripe attributes ===== 22:05:04 (1713492304) PASS 204a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 204b: Print default stripe size and offset ========================================================== 22:05:09 (1713492309) PASS 204b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 204c: Print default stripe count and offset ========================================================== 22:05:14 (1713492314) PASS 204c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 204d: Print default stripe count and size ========================================================== 22:05:18 (1713492318) PASS 204d (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 204e: Print raw stripe attributes ========= 22:05:23 (1713492323) PASS 204e (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 204f: Print raw stripe size and offset ==== 22:05:28 (1713492328) PASS 204f (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 204g: Print raw stripe count and offset === 22:05:33 (1713492333) PASS 204g (2s) debug_raw_pointers=0 debug_raw_pointers=0 
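Test 200 above walks an OST pool through its whole lifecycle. A sketch of the equivalent commands, with the pool, OST, and directory names copied from the log (the lctl pool_* calls run on the MGS node; this mirrors what the test exercises, not its exact code path):

    lctl pool_new lustre.cea1                         # create pool 'cea1' in fs 'lustre'
    lctl pool_add lustre.cea1 lustre-OST0000_UUID     # add the first OST target
    lfs setstripe --pool cea1 /mnt/lustre/d200.pools/dir_tst   # bind a directory to the pool
    lctl pool_remove lustre.cea1 lustre-OST0000_UUID  # drop the OST again
    lctl pool_destroy lustre.cea1                     # and delete the (now empty) pool

Files created under a pool-bound directory are allocated only on that pool's OSTs, which is what the "Checking files allocation from directory pool" step verifies.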
debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 204h: Print raw stripe count and size ===== 22:05:38 (1713492338) PASS 204h (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 205a: Verify job stats ==================== 22:05:43 (1713492343) Setting lustre.sys.jobid_var from procname_uid to nodelocal Waiting 90s for 'nodelocal' Updated after 2s: want 'nodelocal' got 'nodelocal' mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl30' mdt.lustre-MDT0000.job_cleanup_interval=5 jobid_name=id.205a.%e.5500 Test: /home/green/git/lustre-release/lustre/utils/lfs mkdir -i 0 -c 1 /mnt/lustre/d205a.sanity Using JobID environment nodelocal=id.205a.lfs.5500 jobid_name=id.205a.%e.8302 Test: rmdir /mnt/lustre/d205a.sanity Using JobID environment nodelocal=id.205a.rmdir.8302 jobid_name=id.205a.%e.17502 Test: mknod /mnt/lustre/f205a.sanity c 1 3 Using JobID environment nodelocal=id.205a.mknod.17502 jobid_name=id.205a.%e.8448 Test: rm -f /mnt/lustre/f205a.sanity Using JobID environment nodelocal=id.205a.rm.8448 jobid_name=id.205a.%e.17774 Test: /home/green/git/lustre-release/lustre/utils/lfs setstripe -i 0 -c 1 /mnt/lustre/f205a.sanity Using JobID environment nodelocal=id.205a.lfs.17774 jobid_name=id.205a.%e.19715 Test: touch /mnt/lustre/f205a.sanity Using JobID environment nodelocal=id.205a.touch.19715 jobid_name=id.205a.%e.15197 Test: dd if=/dev/zero of=/mnt/lustre/f205a.sanity bs=1M count=1 oflag=sync Using JobID environment nodelocal=id.205a.dd.15197 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.134773 s, 7.8 MB/s jobid_name=id.205a.%e.740 Test: dd if=/mnt/lustre/f205a.sanity of=/dev/null bs=1M count=1 iflag=direct Using JobID environment nodelocal=id.205a.dd.740 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0423277 s, 24.8 MB/s jobid_name=id.205a.%e.23174 Test: /home/green/git/lustre-release/lustre/tests/truncate /mnt/lustre/f205a.sanity 0 Using JobID environment nodelocal=id.205a.truncate.23174 jobid_name=id.205a.%e.5093 Test: mv -f /mnt/lustre/f205a.sanity /mnt/lustre/d205a.sanity.rename Using JobID environment nodelocal=id.205a.mv.5093 jobid_name=id.205a.%e.17043 Test: /home/green/git/lustre-release/lustre/utils/lfs mkdir -i 0 -c 1 /mnt/lustre/d205a.sanity.expire Using JobID environment nodelocal=id.205a.lfs.17043 lustre-MDT0000.500097713 01CREAT 02:05:56.010160964 2024.04.19 0x0 t=[0x200002342:0x3705:0x0] j=id.205a.lfs.17774 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200000007:0x1:0x0] f205a.sanity lustre-MDT0000.500097714 12LYOUT 02:05:56.019587678 2024.04.19 0x0 t=[0x200002342:0x3705:0x0] j=id.205a.lfs.17774 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200000007:0x1:0x0] lustre-MDT0000.500097715 11CLOSE 02:05:56.024267920 2024.04.19 0x2 t=[0x200002342:0x3705:0x0] j=id.205a.lfs.17774 ef=0xf u=0:0 nid=192.168.202.16@tcp lustre-MDT0000.500097716 11CLOSE 02:05:56.030718542 2024.04.19 0x42 t=[0x200002342:0x3705:0x0] j=id.205a.lfs.17774 ef=0xf u=0:0 nid=192.168.202.16@tcp lustre-MDT0000.500097717 11CLOSE 02:05:57.593621617 2024.04.19 0x42 t=[0x200002342:0x3705:0x0] j=id.205a.touch.19715 ef=0xf u=0:0 nid=192.168.202.16@tcp lustre-MDT0000.500097718 13TRUNC 02:05:59.504223936 2024.04.19 0xe t=[0x200002342:0x3705:0x0] j=id.205a.dd.15197 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200000007:0x1:0x0] lustre-MDT0000.500097719 11CLOSE 02:05:59.645368991 2024.04.19 0x242 t=[0x200002342:0x3705:0x0] j=id.205a.dd.15197 ef=0xf u=0:0 nid=192.168.202.16@tcp lustre-MDT0000.500097720 13TRUNC 02:06:02.823304645 
2024.04.19 0xe t=[0x200002342:0x3705:0x0] j=id.205a.truncate.23174 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200000007:0x1:0x0]
lustre-MDT0000.500097721 08RENME 02:06:04.745268555 2024.04.19 0x0 t=[0:0x0:0x0] j=id.205a.mv.5093 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200000007:0x1:0x0] d205a.sanity.rename s=[0x200002342:0x3705:0x0] sp=[0x200000007:0x1:0x0] f205a.sanity
lustre-MDT0000.500097722 02MKDIR 02:06:06.319189154 2024.04.19 0x0 t=[0x200002342:0x3708:0x0] j=id.205a.lfs.17043 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200000007:0x1:0x0] d205a.sanity.expire
Setting lustre.sys.jobid_var from nodelocal to disable
Waiting 90s for 'disable'
Updated after 2s: want 'disable' got 'disable'
lustre-MDT0000.500097711 05MKNOD 02:05:52.871899870 2024.04.19 0x0 t=[0x200002342:0x3704:0x0] j=id.205a.mknod.17502 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200000007:0x1:0x0] f205a.sanity
lustre-MDT0000.500097712 06UNLNK 02:05:54.447386960 2024.04.19 0x1 t=[0x200002342:0x3704:0x0] j=id.205a.rm.8448 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200000007:0x1:0x0] f205a.sanity
lustre-MDT0000.500097713 01CREAT 02:05:56.010160964 2024.04.19 0x0 t=[0x200002342:0x3705:0x0] j=id.205a.lfs.17774 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200000007:0x1:0x0] f205a.sanity
lustre-MDT0000.500097721 08RENME 02:06:04.745268555 2024.04.19 0x0 t=[0:0x0:0x0] j=id.205a.mv.5093 ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200000007:0x1:0x0] d205a.sanity.rename s=[0x200002342:0x3705:0x0] sp=[0x200000007:0x1:0x0] f205a.sanity
lustre-MDT0000.500097723 01CREAT 02:06:09.692009429 2024.04.19 0x0 t=[0x200002342:0x3709:0x0] ef=0xf u=0:0 nid=192.168.202.16@tcp p=[0x200000007:0x1:0x0] f205a.sanity
jobid_var=USER
jobid_name=S.%j.%e.%u.%h.E
Test: touch /mnt/lustre/f205a.sanity
Using JobID environment USER=S.root.touch.0.oleg216-client.v
jobid_var=USER
jobid_name=S.%j.%e.%u.%H.E
Test: touch /mnt/lustre/f205a.sanity
Using JobID environment USER=S.root.touch.0.oleg216-client.E
jobid_var=session
jobid_name=S.%j.%e.%u.%h.E
jobid_this_session=root
Test: touch /mnt/lustre/f205a.sanity
Using JobID environment session=S.root.touch.0.oleg216-client.v
mdt.lustre-MDT0000.job_cleanup_interval=600
jobid_name=%e.%u
lustre-MDT0000: clear the changelog for cl30 of all records
lustre-MDT0000: Deregistered changelog user #30
Setting lustre.sys.jobid_var from session to procname_uid
Waiting 90s for 'procname_uid'
Updated after 2s: want 'procname_uid' got 'procname_uid'
PASS 205a (37s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 205b: Verify job stats jobid and output format ========================================================== 22:06:22 (1713492382)
mdt.lustre-MDT0000.job_stats=clear
jobid_var=USER
jobid_name=%j.%e.%u
open: { samples: 1, unit: usecs, min: 1310, max: 1310, sum: 1310, sumsq: 1716100 }
jobid_var=TEST205b
mdt.lustre-MDT0000.job_stats="has\x20sp.touch.0"
jobid_name=%e.%u
jobid_var=procname_uid
PASS 205b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 205c: Verify client stats format ========== 22:06:26 (1713492386)
llite.lustre-ffff8800add22800.stats=0
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00389865 s, 1.1 MB/s
llite.lustre-ffff8800add22800.stats=
snapshot_time         1713492386.651041189 secs.nsecs
start_time            1713492386.637910283 secs.nsecs
elapsed_time          0.013130906 secs.nsecs
write_bytes           1 samples [bytes] 4096 4096 4096 16777216
write                 1 samples [usecs] 2083 2083 2083 4338889
open                  1 samples [usecs] 36 36 36 1296
close                 1 samples [usecs] 1578 1578 1578 2490084
mknod                 1 samples [usecs] 3778 3778 3778 14273284
inode_permission      3 samples [usecs] 2 142 222 26252
opencount             1 samples [reqs] 1 1 1 1
write_bytes           1 samples [bytes] 4096 4096 4096 16777216
PASS 205c (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 205d: verify the format of some stats files ========================================================== 22:06:29 (1713492389)
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.278761 s, 37.6 MB/s
rename_stats:
- snapshot_time: 1713492390.912668150
- start_time: 1713491072.067122153
- elapsed_time: 1318.845545997
- same_dir:
    512bytes: { sample: 4, pct: 0, cum_pct: 0 }
    1KB: { sample: 0, pct: 0, cum_pct: 0 }
    2KB: { sample: 0, pct: 0, cum_pct: 0 }
    4KB: { sample: 0, pct: 0, cum_pct: 0 }
    8KB: { sample: 0, pct: 0, cum_pct: 0 }
    16KB: { sample: 0, pct: 0, cum_pct: 0 }
    32KB: { sample: 0, pct: 0, cum_pct: 0 }
    64KB: { sample: 0, pct: 0, cum_pct: 0 }
    128KB: { sample: 0, pct: 0, cum_pct: 0 }
    256KB: { sample: 0, pct: 0, cum_pct: 0 }
    512KB: { sample: 1, pct: 0, cum_pct: 0 }
    1MB: { sample: 57, pct: 0, cum_pct: 0 }
    2MB: { sample: 29943, pct: 99, cum_pct: 100 }
- crossdir_src:
    512bytes: { sample: 3, pct: 100, cum_pct: 100 }
- crossdir_tgt:
    512bytes: { sample: 3, pct: 100, cum_pct: 100 }
verify rename_stats... OK
verify mdt job_stats... OK
verify ost job_stats... OK
PASS 205d (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 205e: verify the output of lljobstat ====== 22:06:35 (1713492395)
jobid_var=nodelocal
jobid_name=205e.%e.%u
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.663509 s, 15.8 MB/s
mdt.lustre-MDT0000.job_stats= job_stats: - job_id: .touch.0 snapshot_time: 1713492383.335445673 secs.nsecs start_time: 1713492383.324353143 secs.nsecs elapsed_time: 0.011092530 secs.nsecs open: { samples: 1, unit: usecs, min: 1310, max: 1310, sum: 1310, sumsq: 1716100 } close: { samples: 1, unit: usecs, min: 263, max: 263, sum: 263, sumsq: 69169 } mknod: { samples: 1, unit: usecs, min: 1072, max: 1072, sum: 1072, sumsq: 1149184 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 1, unit: usecs, min: 104, max: 104, sum: 104, sumsq: 10816 } setattr: { samples: 1, unit: usecs, min: 294, max: 294, sum: 294, sumsq: 86436 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0,
sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: root.lfs.0 snapshot_time: 1713492383.765286744 secs.nsecs start_time: 1713492383.758302317 secs.nsecs elapsed_time: 0.006984427 secs.nsecs open: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } close: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 1, unit: usecs, min: 1242, max: 1242, sum: 1242, sumsq: 1542564 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 1, unit: usecs, min: 83, max: 83, sum: 83, sumsq: 6889 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: dd.0 snapshot_time: 1713492390.567903730 secs.nsecs start_time: 1713492387.144496301 secs.nsecs elapsed_time: 3.423407429 secs.nsecs open: { samples: 2, unit: usecs, min: 615, max: 1360, sum: 1975, sumsq: 2227825 } close: { samples: 2, unit: usecs, min: 269, max: 858, sum: 1127, sumsq: 808525 } mknod: { samples: 1, unit: usecs, min: 1080, max: 1080, sum: 1080, sumsq: 1166400 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 2142, max: 2142, sum: 2142, sumsq: 4588164 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, 
sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 1, unit: usecs, min: 10426, max: 10426, sum: 10426, sumsq: 108701476 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 1, unit: bytes, min: 1048576, max: 1048576, sum: 1048576, sumsq: 1099511627776, hist: { 1M: 1 } } punch: { samples: 1, unit: usecs, min: 314, max: 314, sum: 314, sumsq: 98596 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: mkdir.0 snapshot_time: 1713492390.247586223 secs.nsecs start_time: 1713492390.244284700 secs.nsecs elapsed_time: 0.003301523 secs.nsecs open: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } close: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 1, unit: usecs, min: 1222, max: 1222, sum: 1222, sumsq: 1493284 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 1, unit: usecs, min: 94, max: 94, sum: 94, sumsq: 8836 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 1, unit: usecs, min: 30, max: 30, sum: 30, sumsq: 900 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: lfs.0 snapshot_time: 1713492390.274289468 secs.nsecs start_time: 1713492390.252058599 secs.nsecs elapsed_time: 0.022230869 secs.nsecs open: { samples: 2, unit: usecs, min: 821, max: 2504, sum: 3325, sumsq: 6944057 } close: { samples: 2, unit: usecs, min: 176, max: 197, sum: 373, sumsq: 69785 } mknod: { samples: 1, unit: usecs, min: 607, max: 607, sum: 607, sumsq: 368449 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 3, unit: usecs, min: 43, max: 53, sum: 146, sumsq: 
7158 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: mv.0 snapshot_time: 1713492390.593174464 secs.nsecs start_time: 1713492390.580663020 secs.nsecs elapsed_time: 0.012511444 secs.nsecs open: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } close: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 1, unit: usecs, min: 1460, max: 1460, sum: 1460, sumsq: 2131600 } getattr: { samples: 1, unit: usecs, min: 158, max: 158, sum: 158, sumsq: 24964 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 1, unit: usecs, min: 1460, max: 1460, sum: 1460, sumsq: 2131600 } parallel_rename_file: { samples: 1, unit: usecs, min: 1460, max: 1460, sum: 1460, sumsq: 2131600 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: rm.0 snapshot_time: 1713492394.332352751 secs.nsecs start_time: 1713492394.272246360 secs.nsecs elapsed_time: 0.060106391 secs.nsecs open: { samples: 2, unit: usecs, min: 212, max: 224, sum: 436, sumsq: 95120 } close: { samples: 2, unit: usecs, min: 175, max: 214, sum: 389, sumsq: 76421 } mknod: { samples: 0, unit: 
usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 1, unit: usecs, min: 1398, max: 1398, sum: 1398, sumsq: 1954404 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 1, unit: usecs, min: 917, max: 917, sum: 917, sumsq: 840889 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 2, unit: usecs, min: 56, max: 86, sum: 142, sumsq: 10532 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: 205e.lfs.0 snapshot_time: 1713492396.687228405 secs.nsecs start_time: 1713492396.652987711 secs.nsecs elapsed_time: 0.034240694 secs.nsecs open: { samples: 2, unit: usecs, min: 863, max: 2554, sum: 3417, sumsq: 7267685 } close: { samples: 2, unit: usecs, min: 147, max: 445, sum: 592, sumsq: 219634 } mknod: { samples: 1, unit: usecs, min: 648, max: 648, sum: 648, sumsq: 419904 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 1, unit: usecs, min: 1105, max: 1105, sum: 1105, sumsq: 1221025 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 4, unit: usecs, min: 42, max: 84, sum: 228, sumsq: 14024 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 
0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: 205e.dd.0 snapshot_time: 1713492397.362072895 secs.nsecs start_time: 1713492396.691474622 secs.nsecs elapsed_time: 0.670598273 secs.nsecs open: { samples: 1, unit: usecs, min: 551, max: 551, sum: 551, sumsq: 303601 } close: { samples: 1, unit: usecs, min: 434, max: 434, sum: 434, sumsq: 188356 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 487, max: 487, sum: 487, sumsq: 237169 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 1, unit: usecs, min: 8100, max: 8100, sum: 8100, sumsq: 65610000 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } obdfilter.lustre-OST0000.job_stats= job_stats: - job_id: touch.0 snapshot_time: 1713492300.661499232 secs.nsecs start_time: 1713492127.135849251 secs.nsecs elapsed_time: 173.525649981 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 10, unit: usecs, min: 206, max: 28828, sum: 31348, sumsq: 831808842 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: cp.0 snapshot_time: 1713492239.849371081 secs.nsecs 
start_time: 1713492224.502879396 secs.nsecs elapsed_time: 15.346491685 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 1, unit: bytes, min: 771, max: 771, sum: 771, sumsq: 594441, hist: { 1K: 1 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 1, unit: usecs, min: 201, max: 201, sum: 201, sumsq: 40401 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 418, max: 418, sum: 418, sumsq: 174724 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 1, unit: usecs, min: 46, max: 46, sum: 46, sumsq: 2116 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: lfs.0 snapshot_time: 1713492292.883829986 secs.nsecs start_time: 1713492224.631867953 secs.nsecs elapsed_time: 68.251962033 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 3, unit: usecs, min: 210, max: 240, sum: 663, sumsq: 147069 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 1, unit: usecs, min: 26, max: 26, sum: 26, sumsq: 676 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: cmp.0 snapshot_time: 1713492242.775076616 secs.nsecs start_time: 1713492224.716355905 secs.nsecs elapsed_time: 18.058720711 secs.nsecs read_bytes: { samples: 3, unit: bytes, min: 4096, max: 1048576, sum: 1986560, sumsq: 1971675201536, hist: { 4K: 1, 1M: 2 } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 3, unit: usecs, min: 113, max: 4793, sum: 8692, sumsq: 37319414 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, 
unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: bash.0 snapshot_time: 1713492255.827813239 secs.nsecs start_time: 1713492224.754148474 secs.nsecs elapsed_time: 31.073664765 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 1, unit: bytes, min: 4, max: 4, sum: 4, sumsq: 16, hist: { 4: 1 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 1, unit: usecs, min: 145, max: 145, sum: 145, sumsq: 21025 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 389, max: 389, sum: 389, sumsq: 151321 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: cat.0 snapshot_time: 1713492225.286240269 secs.nsecs start_time: 1713492225.286212839 secs.nsecs elapsed_time: 0.000027430 secs.nsecs read_bytes: { samples: 1, unit: bytes, min: 4096, max: 4096, sum: 4096, sumsq: 16777216, hist: { 4K: 1 } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 1, unit: usecs, min: 143, max: 143, sum: 143, sumsq: 20449 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: chown.0 snapshot_time: 1713492230.640460629 secs.nsecs start_time: 1713492230.640444279 secs.nsecs elapsed_time: 0.000016350 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 390, max: 390, sum: 390, sumsq: 152100 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, 
unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: dd.0 snapshot_time: 1713492240.775004213 secs.nsecs start_time: 1713492238.641270981 secs.nsecs elapsed_time: 2.133733232 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 33, unit: bytes, min: 933888, max: 1048576, sum: 34488320, sumsq: 36056518885376, hist: { 1M: 33 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 33, unit: usecs, min: 2933, max: 5855, sum: 143516, sumsq: 637121658 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: multiop.0 snapshot_time: 1713492283.662051337 secs.nsecs start_time: 1713492283.611195054 secs.nsecs elapsed_time: 0.050856283 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 2, unit: bytes, min: 1000, max: 2000, sum: 3000, sumsq: 5000000, hist: { 1K: 1, 2K: 1 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 2, unit: usecs, min: 55, max: 76, sum: 131, sumsq: 8801 } getattr: { samples: 2, unit: usecs, min: 14, max: 35, sum: 49, sumsq: 1421 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 2, unit: usecs, min: 13214, max: 13975, sum: 27189, sumsq: 369910421 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: id.205a.touch.19715 snapshot_time: 1713492357.590478309 secs.nsecs start_time: 1713492357.590465019 secs.nsecs elapsed_time: 0.000013290 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: 
usecs, min: 405, max: 405, sum: 405, sumsq: 164025 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: id.205a.dd.15197 snapshot_time: 1713492359.640816053 secs.nsecs start_time: 1713492359.510981725 secs.nsecs elapsed_time: 0.129834328 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 1, unit: bytes, min: 1048576, max: 1048576, sum: 1048576, sumsq: 1099511627776, hist: { 1M: 1 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 1, unit: usecs, min: 10962, max: 10962, sum: 10962, sumsq: 120165444 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 417, max: 417, sum: 417, sumsq: 173889 } sync: { samples: 1, unit: usecs, min: 20673, max: 20673, sum: 20673, sumsq: 427372929 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: id.205a.dd.740 snapshot_time: 1713492361.263378875 secs.nsecs start_time: 1713492361.263352898 secs.nsecs elapsed_time: 0.000025977 secs.nsecs read_bytes: { samples: 1, unit: bytes, min: 1048576, max: 1048576, sum: 1048576, sumsq: 1099511627776, hist: { 1M: 1 } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 1, unit: usecs, min: 6367, max: 6367, sum: 6367, sumsq: 40538689 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: id.205a.truncate.23174 snapshot_time: 1713492362.832963765 secs.nsecs start_time: 1713492362.832946885 secs.nsecs elapsed_time: 0.000016880 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } 
write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 524, max: 524, sum: 524, sumsq: 274576 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: .touch.0 snapshot_time: 1713492383.333747807 secs.nsecs start_time: 1713492383.333738572 secs.nsecs elapsed_time: 0.000009235 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 255, max: 255, sum: 255, sumsq: 65025 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: 205e.dd.0 snapshot_time: 1713492397.360224667 secs.nsecs start_time: 1713492396.698675875 secs.nsecs elapsed_time: 0.661548792 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 10, unit: bytes, min: 1048576, max: 1048576, sum: 10485760, sumsq: 10995116277760, hist: { 1M: 10 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 10, unit: usecs, min: 2656, max: 5334, sum: 32902, sumsq: 113984704 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 312, max: 312, sum: 312, sumsq: 97344 } sync: { samples: 10, unit: usecs, min: 9924, max: 11282, sum: 106549, sumsq: 1137711927 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, 
max: 0, sum: 0, sumsq: 0 } obdfilter.lustre-OST0001.job_stats= job_stats: - job_id: touch.0 snapshot_time: 1713492300.661476362 secs.nsecs start_time: 1713490498.485571577 secs.nsecs elapsed_time: 1802.175904785 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 19, unit: usecs, min: 164, max: 22202, sum: 84610, sumsq: 1630882082 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: cat.0 snapshot_time: 1713492225.308005181 secs.nsecs start_time: 1713491954.210187509 secs.nsecs elapsed_time: 271.097817672 secs.nsecs read_bytes: { samples: 2, unit: bytes, min: 0, max: 4096, sum: 4096, sumsq: 16777216, hist: { 1: 1, 4K: 1 } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 2, unit: usecs, min: 127, max: 211, sum: 338, sumsq: 60650 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: bash.0 snapshot_time: 1713492255.764789213 secs.nsecs start_time: 1713491981.041720730 secs.nsecs elapsed_time: 274.723068483 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 2, unit: bytes, min: 4, max: 7, sum: 11, sumsq: 65, hist: { 4: 1, 8: 1 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 2, unit: usecs, min: 164, max: 166, sum: 330, sumsq: 54452 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 416, max: 416, sum: 416, sumsq: 173056 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 
0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: multiop.0 snapshot_time: 1713492120.983050638 secs.nsecs start_time: 1713492119.726258310 secs.nsecs elapsed_time: 1.256792328 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 10, unit: bytes, min: 1048576, max: 1048576, sum: 10485760, sumsq: 10995116277760, hist: { 1M: 10 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 10, unit: usecs, min: 3714, max: 18450, sum: 105531, sumsq: 1296427265 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 8247, max: 8247, sum: 8247, sumsq: 68013009 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: dd.0 snapshot_time: 1713492390.575563173 secs.nsecs start_time: 1713492120.462005564 secs.nsecs elapsed_time: 270.113557609 secs.nsecs read_bytes: { samples: 11, unit: bytes, min: 4096, max: 1048576, sum: 10485760, sumsq: 10986559897600, hist: { 4K: 1, 1M: 10 } } write_bytes: { samples: 72, unit: bytes, min: 1048576, max: 1048576, sum: 75497472, sumsq: 79164837199872, hist: { 1M: 72 } } read: { samples: 11, unit: usecs, min: 165, max: 5938, sum: 37369, sumsq: 154631755 } write: { samples: 72, unit: usecs, min: 2432, max: 7492, sum: 287506, sumsq: 1240190488 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 392, max: 392, sum: 392, sumsq: 153664 } sync: { samples: 2, unit: usecs, min: 21585, max: 31475, sum: 53060, sumsq: 1456587850 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: cp.0 snapshot_time: 1713492240.654414983 secs.nsecs start_time: 1713492224.409194707 secs.nsecs elapsed_time: 16.245220276 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 32, unit: bytes, min: 1716, max: 1048576, sum: 32507572, sumsq: 34084863405712, hist: { 2K: 1, 1M: 31 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 32, unit: usecs, min: 113, max: 10237, sum: 147653, sumsq: 789247579 } getattr: { samples: 0, unit: 
usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 2, unit: usecs, min: 485, max: 18016, sum: 18501, sumsq: 324811481 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: lfs.0 snapshot_time: 1713492292.885849290 secs.nsecs start_time: 1713492224.631770873 secs.nsecs elapsed_time: 68.254078417 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 2, unit: usecs, min: 20486, max: 116136, sum: 136622, sumsq: 13907246692 } setattr: { samples: 5, unit: usecs, min: 124, max: 262, sum: 1030, sumsq: 225020 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 1, unit: usecs, min: 46, max: 46, sum: 46, sumsq: 2116 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: cmp.0 snapshot_time: 1713492242.654541751 secs.nsecs start_time: 1713492224.695950620 secs.nsecs elapsed_time: 17.958591131 secs.nsecs read_bytes: { samples: 3, unit: bytes, min: 4096, max: 1048576, sum: 1986560, sumsq: 1971675201536, hist: { 4K: 1, 1M: 2 } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 3, unit: usecs, min: 146, max: 6143, sum: 10290, sumsq: 53765766 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: S.root.touch.0.oleg216-client.v snapshot_time: 1713492374.192801077 secs.nsecs start_time: 1713492371.043128698 secs.nsecs elapsed_time: 3.149672379 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, 
sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 2, unit: usecs, min: 195, max: 381, sum: 576, sumsq: 183186 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: S.root.touch.0.oleg216-client.E snapshot_time: 1713492372.631746215 secs.nsecs start_time: 1713492372.631735530 secs.nsecs elapsed_time: 0.000010685 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 233, max: 233, sum: 233, sumsq: 54289 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: "has\x20sp.touch.0" snapshot_time: 1713492383.783305691 secs.nsecs start_time: 1713492383.783296113 secs.nsecs elapsed_time: 0.000009578 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 179, max: 179, sum: 179, sumsq: 32041 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, 
sumsq: 0 }
max: 0, sum: 0, sumsq: 0 } setattr: { samples: 3, unit: usecs, min: 210, max: 240, sum: 663, sumsq: 147069 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 1, unit: usecs, min: 26, max: 26, sum: 26, sumsq: 676 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: cmp.0 snapshot_time: 1713492242.775076616 secs.nsecs start_time: 1713492224.716355905 secs.nsecs elapsed_time: 18.058720711 secs.nsecs read_bytes: { samples: 3, unit: bytes, min: 4096, max: 1048576, sum: 1986560, sumsq: 1971675201536, hist: { 4K: 1, 1M: 2 } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 3, unit: usecs, min: 113, max: 4793, sum: 8692, sumsq: 37319414 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: bash.0 snapshot_time: 1713492255.827813239 secs.nsecs start_time: 1713492224.754148474 secs.nsecs elapsed_time: 31.073664765 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 1, unit: bytes, min: 4, max: 4, sum: 4, sumsq: 16, hist: { 4: 1 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 1, unit: usecs, min: 145, max: 145, sum: 145, sumsq: 21025 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 389, max: 389, sum: 389, sumsq: 151321 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: cat.0 snapshot_time: 1713492225.286240269 secs.nsecs start_time: 1713492225.286212839 secs.nsecs elapsed_time: 0.000027430 secs.nsecs read_bytes: { samples: 1, unit: bytes, min: 4096, max: 4096, sum: 4096, sumsq: 16777216, hist: { 4K: 1 } } 
write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 1, unit: usecs, min: 143, max: 143, sum: 143, sumsq: 20449 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: chown.0 snapshot_time: 1713492230.640460629 secs.nsecs start_time: 1713492230.640444279 secs.nsecs elapsed_time: 0.000016350 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 390, max: 390, sum: 390, sumsq: 152100 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: dd.0 snapshot_time: 1713492240.775004213 secs.nsecs start_time: 1713492238.641270981 secs.nsecs elapsed_time: 2.133733232 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 33, unit: bytes, min: 933888, max: 1048576, sum: 34488320, sumsq: 36056518885376, hist: { 1M: 33 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 33, unit: usecs, min: 2933, max: 5855, sum: 143516, sumsq: 637121658 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: 
multiop.0 snapshot_time: 1713492283.662051337 secs.nsecs start_time: 1713492283.611195054 secs.nsecs elapsed_time: 0.050856283 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 2, unit: bytes, min: 1000, max: 2000, sum: 3000, sumsq: 5000000, hist: { 1K: 1, 2K: 1 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 2, unit: usecs, min: 55, max: 76, sum: 131, sumsq: 8801 } getattr: { samples: 2, unit: usecs, min: 14, max: 35, sum: 49, sumsq: 1421 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 2, unit: usecs, min: 13214, max: 13975, sum: 27189, sumsq: 369910421 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: id.205a.touch.19715 snapshot_time: 1713492357.590478309 secs.nsecs start_time: 1713492357.590465019 secs.nsecs elapsed_time: 0.000013290 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 405, max: 405, sum: 405, sumsq: 164025 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: id.205a.dd.15197 snapshot_time: 1713492359.640816053 secs.nsecs start_time: 1713492359.510981725 secs.nsecs elapsed_time: 0.129834328 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 1, unit: bytes, min: 1048576, max: 1048576, sum: 1048576, sumsq: 1099511627776, hist: { 1M: 1 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 1, unit: usecs, min: 10962, max: 10962, sum: 10962, sumsq: 120165444 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 417, max: 417, sum: 417, sumsq: 173889 } sync: { samples: 1, unit: usecs, min: 20673, max: 20673, sum: 20673, sumsq: 427372929 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, 
max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: id.205a.dd.740 snapshot_time: 1713492361.263378875 secs.nsecs start_time: 1713492361.263352898 secs.nsecs elapsed_time: 0.000025977 secs.nsecs read_bytes: { samples: 1, unit: bytes, min: 1048576, max: 1048576, sum: 1048576, sumsq: 1099511627776, hist: { 1M: 1 } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 1, unit: usecs, min: 6367, max: 6367, sum: 6367, sumsq: 40538689 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: id.205a.truncate.23174 snapshot_time: 1713492362.832963765 secs.nsecs start_time: 1713492362.832946885 secs.nsecs elapsed_time: 0.000016880 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 524, max: 524, sum: 524, sumsq: 274576 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: .touch.0 snapshot_time: 1713492383.333747807 secs.nsecs start_time: 1713492383.333738572 secs.nsecs elapsed_time: 0.000009235 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 255, max: 255, sum: 255, sumsq: 65025 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { 
samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: 205e.dd.0 snapshot_time: 1713492397.360224667 secs.nsecs start_time: 1713492396.698675875 secs.nsecs elapsed_time: 0.661548792 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 10, unit: bytes, min: 1048576, max: 1048576, sum: 10485760, sumsq: 10995116277760, hist: { 1M: 10 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 10, unit: usecs, min: 2656, max: 5334, sum: 32902, sumsq: 113984704 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 312, max: 312, sum: 312, sumsq: 97344 } sync: { samples: 10, unit: usecs, min: 9924, max: 11282, sum: 106549, sumsq: 1137711927 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } obdfilter.lustre-OST0001.job_stats= job_stats: - job_id: touch.0 snapshot_time: 1713492300.661476362 secs.nsecs start_time: 1713490498.485571577 secs.nsecs elapsed_time: 1802.175904785 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 19, unit: usecs, min: 164, max: 22202, sum: 84610, sumsq: 1630882082 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: cat.0 snapshot_time: 1713492225.308005181 secs.nsecs start_time: 1713491954.210187509 secs.nsecs elapsed_time: 271.097817672 secs.nsecs read_bytes: { samples: 2, unit: bytes, min: 0, max: 4096, sum: 4096, sumsq: 16777216, hist: { 1: 1, 4K: 1 } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 2, unit: 
usecs, min: 127, max: 211, sum: 338, sumsq: 60650 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: bash.0 snapshot_time: 1713492255.764789213 secs.nsecs start_time: 1713491981.041720730 secs.nsecs elapsed_time: 274.723068483 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 2, unit: bytes, min: 4, max: 7, sum: 11, sumsq: 65, hist: { 4: 1, 8: 1 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 2, unit: usecs, min: 164, max: 166, sum: 330, sumsq: 54452 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 416, max: 416, sum: 416, sumsq: 173056 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: multiop.0 snapshot_time: 1713492120.983050638 secs.nsecs start_time: 1713492119.726258310 secs.nsecs elapsed_time: 1.256792328 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 10, unit: bytes, min: 1048576, max: 1048576, sum: 10485760, sumsq: 10995116277760, hist: { 1M: 10 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 10, unit: usecs, min: 3714, max: 18450, sum: 105531, sumsq: 1296427265 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 8247, max: 8247, sum: 8247, sumsq: 68013009 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: dd.0 snapshot_time: 1713492390.575563173 secs.nsecs start_time: 
1713492120.462005564 secs.nsecs elapsed_time: 270.113557609 secs.nsecs read_bytes: { samples: 11, unit: bytes, min: 4096, max: 1048576, sum: 10485760, sumsq: 10986559897600, hist: { 4K: 1, 1M: 10 } } write_bytes: { samples: 72, unit: bytes, min: 1048576, max: 1048576, sum: 75497472, sumsq: 79164837199872, hist: { 1M: 72 } } read: { samples: 11, unit: usecs, min: 165, max: 5938, sum: 37369, sumsq: 154631755 } write: { samples: 72, unit: usecs, min: 2432, max: 7492, sum: 287506, sumsq: 1240190488 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 392, max: 392, sum: 392, sumsq: 153664 } sync: { samples: 2, unit: usecs, min: 21585, max: 31475, sum: 53060, sumsq: 1456587850 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: cp.0 snapshot_time: 1713492240.654414983 secs.nsecs start_time: 1713492224.409194707 secs.nsecs elapsed_time: 16.245220276 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 32, unit: bytes, min: 1716, max: 1048576, sum: 32507572, sumsq: 34084863405712, hist: { 2K: 1, 1M: 31 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 32, unit: usecs, min: 113, max: 10237, sum: 147653, sumsq: 789247579 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 2, unit: usecs, min: 485, max: 18016, sum: 18501, sumsq: 324811481 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: lfs.0 snapshot_time: 1713492292.885849290 secs.nsecs start_time: 1713492224.631770873 secs.nsecs elapsed_time: 68.254078417 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 2, unit: usecs, min: 20486, max: 116136, sum: 136622, sumsq: 13907246692 } setattr: { samples: 5, unit: usecs, min: 124, max: 262, sum: 1030, sumsq: 225020 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 1, 
unit: usecs, min: 46, max: 46, sum: 46, sumsq: 2116 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: cmp.0 snapshot_time: 1713492242.654541751 secs.nsecs start_time: 1713492224.695950620 secs.nsecs elapsed_time: 17.958591131 secs.nsecs read_bytes: { samples: 3, unit: bytes, min: 4096, max: 1048576, sum: 1986560, sumsq: 1971675201536, hist: { 4K: 1, 1M: 2 } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 3, unit: usecs, min: 146, max: 6143, sum: 10290, sumsq: 53765766 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: S.root.touch.0.oleg216-client.v snapshot_time: 1713492374.192801077 secs.nsecs start_time: 1713492371.043128698 secs.nsecs elapsed_time: 3.149672379 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 2, unit: usecs, min: 195, max: 381, sum: 576, sumsq: 183186 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: S.root.touch.0.oleg216-client.E snapshot_time: 1713492372.631746215 secs.nsecs start_time: 1713492372.631735530 secs.nsecs elapsed_time: 0.000010685 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 233, max: 233, sum: 233, sumsq: 54289 } punch: { samples: 0, unit: 
usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: "has\x20sp.touch.0" snapshot_time: 1713492383.783305691 secs.nsecs start_time: 1713492383.783296113 secs.nsecs elapsed_time: 0.000009578 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 179, max: 179, sum: 179, sumsq: 32041 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } --- timestamp: 1713492398 top_jobs: - dd.0: {ops: 127, op: 2, cl: 2, mn: 1, sa: 1, sy: 2, rd: 11, wr: 106, pu: 2} - cp.0: {ops: 37, wr: 33, pu: 3, gi: 1} - touch.0: {ops: 29, sa: 29} - 205e.dd.0: {ops: 25, op: 1, cl: 1, sa: 1, sy: 11, wr: 10, pu: 1} - lfs.0: {ops: 20, op: 2, cl: 2, mn: 1, ga: 5, sa: 8, st: 2} - multiop.0: {ops: 17, ga: 2, sy: 2, wr: 12, pu: 1} - 205e.lfs.0: {ops: 10, op: 2, cl: 2, mn: 1, mk: 1, ga: 4} - rm.0: {ops: 8, op: 2, cl: 2, ul: 1, rm: 1, ga: 2} - .touch.0: {ops: 6, op: 1, cl: 1, mn: 1, ga: 1, sa: 2} - cmp.0: {ops: 6, rd: 6} - bash.0: {ops: 5, wr: 3, pu: 2} - mkdir.0: {ops: 3, mk: 1, ga: 1, st: 1} - cat.0: {ops: 3, rd: 3} - id.205a.dd.15197: {ops: 3, sy: 1, wr: 1, pu: 1} - root.lfs.0: {ops: 2, mk: 1, ga: 1} - mv.0: {ops: 2, mv: 1, ga: 1} - S.root.touch.0.oleg216-client.v: {ops: 2, sa: 2} - chown.0: {ops: 1, sa: 1} - id.205a.touch.19715: {ops: 1, sa: 1} - id.205a.dd.740: {ops: 1, rd: 1} - id.205a.truncate.23174: {ops: 1, pu: 1} - S.root.touch.0.oleg216-client.E: {ops: 1, sa: 1} - has sp.touch.0: {ops: 1, sa: 1} ... 
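Each counter in the dump above carries samples, min, max, sum and sumsq, which is enough to recover both the mean and the standard deviation of an operation's latency without the server storing every sample: mean = sum/samples and variance = sumsq/samples - mean^2. For the dd.0 write counter on OST0000 (samples: 33, sum: 143516, sumsq: 637121658) that works out to a mean of roughly 4349 usecs and a standard deviation of roughly 627 usecs. A minimal helper along these lines (a hypothetical sketch, not part of the test suite; it assumes the multi-line job_stats layout shown above and the field positions within a counter line):

  job_stat_mean_stddev() {
      # usage: job_stat_mean_stddev <param> <job_id> <counter>
      # e.g.:  job_stat_mean_stddev obdfilter.lustre-OST0000.job_stats dd.0 write
      lctl get_param -n "$1" | awk -v job="$2" -v op="$3" '
          # a job_id line starts a new job section; remember whether it matches
          $1 == "job_id:" || $2 == "job_id:" { injob = ($NF == job) }
          # counter line: op: { samples: S, unit: u, min: m, max: M, sum: X, sumsq: Q }
          injob && $1 == (op ":") {
              s = $4 + 0; sum = $12 + 0; q = $14 + 0
              u = $6; sub(/,$/, "", u)
              if (s > 0) {
                  mean = sum / s
                  var = q / s - mean * mean
                  if (var < 0) var = 0
                  printf "%s %s: samples=%d mean=%.1f stddev=%.1f %s\n",
                         job, op, s, mean, sqrt(var), u
              }
              exit
          }'
  }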
jobid_name=%e.%u jobid_var=procname_uid
PASS 205e (4s)
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 205f: verify qos_ost_weights YAML format == 22:06:41 (1713492401)
- { ost_idx: 0, tgt_weight: 14880768, tgt_penalty: 0, tgt_penalty_per_obj: 1516736, tgt_avail: 14880768, tgt_last_used: 1713492187, svr_nid: 192.168.202.116@tcp, svr_bavail: 29888512, svr_iavail: 1, svr_penalty: 0, svr_penalty_per_obj: 758888, svr_last_used: 1713492187 }
- { ost_idx: 1, tgt_weight: 12676272, tgt_penalty: 0, tgt_penalty_per_obj: 1518816, tgt_avail: 14954496, tgt_last_used: 1713492187, svr_nid: 192.168.202.116@tcp, svr_bavail: 29888512, svr_iavail: 1, svr_penalty: 0, svr_penalty_per_obj: 758888, svr_last_used: 1713492187 }
PASS 205f (3s)
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 205g: stress test for job_stats procfile == 22:06:46 (1713492406)
mdt.lustre-MDT0000.job_cleanup_interval=5 jobid_var=TEST205G_ID jobid_name=%j.%p mdt.lustre-MDT0000.job_stats=clear
/home/green/git/lustre-release/lustre/tests/test-framework.sh: line 4538: 1335 Terminated while true; do printf $DIR/$tfile.{0001..1000} | xargs -P10 -n1 touch; done (wd: ~)
/home/green/git/lustre-release/lustre/tests/test-framework.sh: line 4538: 1336 Terminated __test_205_jobstats_dump 4 (wd: ~)
jobid_name=%e.%u jobid_var=procname_uid mdt.lustre-MDT0000.job_cleanup_interval=600
PASS 205g (93s)
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 205h: check jobid xattr is stored correctly ========================================================== 22:08:22 (1713492502)
mdt.lustre-MDT0000.job_xattr=user.job jobid_var=procname.uid
getfattr: Removing leading '/' from absolute path names
getfattr: Removing leading '/' from absolute path names
mdt.lustre-MDT0000.job_xattr=NONE mdt.lustre-MDT0000.job_xattr=trusted.job
getfattr: Removing leading '/' from absolute path names
jobid_var=procname_uid mdt.lustre-MDT0000.job_xattr=user.job
PASS 205h (5s)
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 205i: check job_xattr parameter accepts and rejects values correctly ========================================================== 22:08:29 (1713492509)
mdt.lustre-MDT0000.job_xattr=user.1234567
oleg216-server: error: set_param: setting /sys/fs/lustre/mdt/lustre-MDT0000/job_xattr=user.12345678: Invalid argument
oleg216-server: error: set_param: setting 'mdt/*/job_xattr'='user.12345678': Invalid argument
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22
oleg216-server: error: set_param: setting /sys/fs/lustre/mdt/lustre-MDT0000/job_xattr=userjob: Invalid argument
oleg216-server: error: set_param: setting 'mdt/*/job_xattr'='userjob': Invalid argument
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22
oleg216-server: error: set_param: setting /sys/fs/lustre/mdt/lustre-MDT0000/job_xattr=user.job/: Invalid argument
oleg216-server: error: set_param: setting 'mdt/*/job_xattr'='user.job/': Invalid argument
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22
oleg216-server: error: set_param: setting /sys/fs/lustre/mdt/lustre-MDT0000/job_xattr=user.job€: Invalid argument
oleg216-server: error: set_param: setting 'mdt/*/job_xattr'='user.job€': Invalid argument
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 123
mdt.lustre-MDT0000.job_xattr=user.job
PASS 205i (5s)
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y
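The job ids seen throughout the 205e dump follow from the settings restored above: with jobid_var=procname_uid and jobid_name=%e.%u, each RPC is tagged <executable>.<uid>, which is why root's commands show up as touch.0, dd.0 and so on. Test 205h additionally stores the id of the job that created a file in the extended attribute named by mdt.*.job_xattr, and 205i verifies the parameter's sanity checks: a suffix longer than seven characters, a name without a prefix, and names containing '/' or non-ASCII bytes are all rejected with "Invalid argument", as the set_param errors above show. A by-hand version of the 205h check might look like this (a sketch only; f-jobxattr-demo is a hypothetical file name, the parameter names and mount point are the ones shown in this run, and the mdt.* setting has to be applied on the MDS):

  lctl set_param jobid_var=procname_uid jobid_name=%e.%u
  lctl set_param mdt.lustre-MDT0000.job_xattr=user.job   # run on the MDS
  touch /mnt/lustre/f-jobxattr-demo
  getfattr -n user.job /mnt/lustre/f-jobxattr-demo       # expect user.job="touch.0" for root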
debug_raw_pointers=Y
== sanity test 206: fail lov_init_raid0() doesn't lbug === 22:08:36 (1713492516)
fail_loc=0xa0001403 fail_val=1
PASS 206 (3s)
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 207a: can refresh layout at glimpse ======= 22:08:41 (1713492521)
4+0 records in 4+0 records out 16384 bytes (16 kB) copied, 0.0111666 s, 1.5 MB/s fail_loc=0x170
PASS 207a (3s)
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 207b: can refresh layout at open ========== 22:08:46 (1713492526)
6+0 records in 6+0 records out 24576 bytes (25 kB) copied, 0.0150157 s, 1.6 MB/s fail_loc=0x171
checksum is 91ff0dac5df86e798bfef5e573536b08 /mnt/lustre/f207b.sanity
PASS 207b (3s)
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 208: Exclusive open ======================= 22:08:51 (1713492531)
==== test 1: verify get lease work
read lease(1) has applied.
==== test 2: verify lease can be broken by upcoming open
no lease applied.
==== test 3: verify lease can't be granted if an open already exists
multiop: cannot get READ lease, ext 0: Device or resource busy (16) multiop: apply/unlock lease error: Device or resource busy
==== test 4: lease can sustain over recovery
Failing mds1 on oleg216-server Stopping /mnt/lustre-mds1 (opts:) on oleg216-server 22:09:00 (1713492540) shut down Failover mds1 to oleg216-server mount facets: mds1 Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg216-server: oleg216-server.virtnet: executing set_default_debug all all pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 Started lustre-MDT0000 22:09:14 (1713492554) targets are mounted 22:09:14 (1713492554) facet_failover done oleg216-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
read lease(1) has applied.
==== test 5: lease broken can't be regained by replay
Failing mds1 on oleg216-server Stopping /mnt/lustre-mds1 (opts:) on oleg216-server 22:09:22 (1713492562) shut down Failover mds1 to oleg216-server mount facets: mds1 Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg216-server: oleg216-server.virtnet: executing set_default_debug all all pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 Started lustre-MDT0000 22:09:35 (1713492575) targets are mounted 22:09:35 (1713492575) facet_failover done oleg216-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
no lease applied.
PASS 208 (51s)
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 209: read-only open/close requests should be freed promptly ========================================================== 22:09:44 (1713492584)
before: 18, after: 19
PASS 209 (10s)
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 210: lfs getstripe does not break leases == 22:09:56 (1713492596)
/mnt/lustre/f210.sanity lmm_stripe_count: 1 lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 0 obdidx objid objid group 0 61026 0xee62 0x240000bd1
write lease(2) has applied.
/mnt/lustre/f210.sanity lmm_stripe_count: 1 lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 0 obdidx objid objid group 0 61026 0xee62 0x240000bd1
read lease(1) has applied.
PASS 210 (4s)
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 211: failed mirror split doesn't break write lease ========================================================== 22:10:03 (1713492603)
10+0 records in 10+0 records out 40960 bytes (41 kB) copied, 0.222834 s, 184 kB/s 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.0287348 s, 143 kB/s
/mnt/lustre/f211.sanity lcm_layout_gen: 2 lcm_mirror_count: 2 lcm_entry_count: 2 lcme_id: 65537 lcme_mirror_id: 1 lcme_flags: init lcme_extent.e_start: 0 lcme_extent.e_end: EOF lmm_stripe_count: 1 lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 1 lmm_objects: - 0: { l_ost_idx: 1, l_fid: [0x280000402:0xeb83:0x0] } lcme_id: 131073 lcme_mirror_id: 2 lcme_flags: init,stale lcme_extent.e_start: 0 lcme_extent.e_end: EOF lmm_stripe_count: 1 lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 0 lmm_objects: - 0: { l_ost_idx: 0, l_fid: [0x240000bd1:0xee63:0x0] }
lfs mirror split: cannot destroy the last non-stale mirror of file '/mnt/lustre/f211.sanity'
write lease(2) has applied.
PASS 211 (3s)
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 212: Sendfile test ====================================================================================================== 22:10:08 (1713492608)
4738+0 records in 4738+0 records out 4851712 bytes (4.9 MB) copied, 0.758068 s, 6.4 MB/s
PASS 212 (4s)
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 213: OSC lock completion and cancel race don't crash - bug 18829 ========================================================== 22:10:14 (1713492614)
4+0 records in 4+0 records out 16384 bytes (16 kB) copied, 0.0145383 s, 1.1 MB/s fail_loc=0x8000040f
PASS 213 (13s)
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 214: hash-indexed directory test - bug 20133 ========================================================== 22:10:29 (1713492629)
total 27 drwxr-xr-x 2 root root 27136 Apr 18 22:10 d214c
a0 a1 a10 a100 a101 a102 a103 a104 a105 a106 a107 a108 a109 a11 a110 a111 a112 a113 a114 a115 a116 a117 a118 a119 a12 a120 a121 a122 a123 a124 a125 a126 a127 a128 a129 a13 a130 a131 a132 a133 a134 a135 a136 a137 a138 a139 a14 a140 a141 a142 a143 a144 a145 a146 a147 a148 a149 a15 a150 a151 a152 a153 a154 a155 a156 a157 a158 a159 a16 a160 a161 a162 a163 a164 a165 a166 a167 a168 a169 a17 a170 a171 a172 a173 a174 a175 a176 a177 a178 a179 a18 a180 a181 a182 a183 a184 a185 a186 a187 a188 a189 a19 a190 a191 a192 a193 a194 a195 a196 a197 a198 a199 a2 a20 a200 a201 a202 a203 a204 a205 a206 a207 a208 a209 a21 a210 a211 a212 a213 a214 a215 a216 a217 a218 a219 a22 a220 a221 a222 a223 a224 a225 a226 a227 a228 a229 a23 a230 a231 a232 a233 a234 a235 a236 a237 a238 a239 a24 a240 a241 a242 a243 a244 a245 a246 a247 a248 a249 a25 a250 a251 a252 a253 a254 a255 a256 a257 a258 a259 a26 a260 a261 a262 a263 a264 a265 a266 a267 a268 a269 a27 a270 a271 a272 a273 a274 a275 a276 a277 a278 a279 a28 a280 a281 a282 a283 a284 a285 a286 a287 a288 a289 a29 a290 a291 a292 a293 a294 a295 a296 a297 a298 a299 a3 a30 a300 a301 a302 a303 a304 a305 a306 a307 a308 a309 a31 a310 a311 a312 a313 a314 a315 a316 a317 a318
a319 a32 a320 a321 a322 a323 a324 a325 a326 a327 a328 a329 a33 a330 a331 a332 a333 a334 a335 a336 a337 a338 a339 a34 a35 a36 a37 a38 a39 a4 a40 a41 a42 a43 a44 a45 a46 a47 a48 a49 a5 a50 a51 a52 a53 a54 a55 a56 a57 a58 a59 a6 a60 a61 a62 a63 a64 a65 a66 a67 a68 a69 a7 a70 a71 a72 a73 a74 a75 a76 a77 a78 a79 a8 a80 a81 a82 a83 a84 a85 a86 a87 a88 a89 a9 a90 a91 a92 a93 a94 a95 a96 a97 a98 a99 PASS 214 (14s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 215: lnet exists and has proper content - bugs 18102, 21079, 21517 ========================================================== 22:10:45 (1713492645) 0 334 0 439983 302747 0 0 877812352 754795676 0 0 PASS 215 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 216: check lockless direct write updates file size and kms correctly ========================================================== 22:10:50 (1713492650) error: get_param: param_path 'osc/*/contention_seconds': No such file or directory error: set_param: param_path 'osc/*/contention_seconds': No such file or directory error: set_param: setting 'osc/*/contention_seconds'='60': No such file or directory directio on /mnt/lustre/f216.sanity for 10x4096 bytes PASS /mnt/lustre/f216.sanity has size 40960 OK error: set_param: param_path 'osc/*/contention_seconds': No such file or directory error: set_param: setting 'osc/*/contention_seconds'='0': No such file or directory 0+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00360746 s, 0.0 kB/s /mnt/lustre/f216.sanity has size 0 OK error: set_param: setting : Invalid argument PASS 216 (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 217: check lctl ping for hostnames with embedded hyphen ('-') ========================================================== 22:11:00 (1713492660) node: 'oleg216-client.virtnet', nid: '192.168.202.16', node_ip='192.168.202.16' lctl ping node oleg216-client.virtnet@tcp 12345-0@lo 192.168.202.16@tcp node: 'oleg216-server', nid: '192.168.202.116', node_ip='192.168.202.116' lctl ping node oleg216-server@tcp 12345-0@lo 192.168.202.116@tcp PASS 217 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 218: parallel read and truncate should not deadlock ========================================================== 22:11:06 (1713492666) creating a 10 Mb file starting reads truncating the file 2560+0 records in 2560+0 records out 10485760 bytes (10 MB) copied, 0.317998 s, 33.0 MB/s killing dd wait until dd is finished removing the temporary file PASS 218 (17s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 219: LU-394: Write partial won't cause uncontiguous pages vec at LND ========================================================== 22:11:26 (1713492686) 1+0 records in 1+0 records out 1024 bytes (1.0 kB) copied, 0.0110136 s, 93.0 kB/s fail_loc=0x411 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.0504513 s, 81.2 kB/s fail_loc=0 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00167163 s, 2.5 MB/s fail_loc=0x411 1+0 records in 1+0 records out 1024 bytes (1.0 kB) copied, 0.000572437 s, 1.8 MB/s 1+0 records in 1+0 records out 1024 bytes (1.0 kB) copied, 0.0348257 s, 29.4 kB/s /mnt/lustre/f219.sanity-2 has size 1024 OK PASS 219 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 220: preallocated MDS objects 
still used if ENOSPC from OST ========================================================== 22:11:31 (1713492691) UUID Inodes IUsed IFree IUse% Mounted on lustre-MDT0000_UUID 553149 4605 548544 1% /mnt/lustre[MDT:0] lustre-OST0000_UUID 128514 11138 117376 9% /mnt/lustre[OST:0] lustre-OST0001_UUID 128616 11080 117536 9% /mnt/lustre[OST:1] filesystem_summary: 239517 4605 234912 2% /mnt/lustre fail_val=-1 fail_loc=0x229 oleg216-server: Pool lustre.test_220 created oleg216-server: OST lustre-OST0000_UUID added to pool lustre.test_220 preallocated objects on MDS is 16 (61217 - 61201) OST still has 0 kbytes free create 16 files @next_id... total: 16 open/close in 0.16 seconds: 101.42 ops/second after creation, last_id=61217, next_id=61217 UUID Inodes IUsed IFree IUse% Mounted on lustre-MDT0000_UUID 553154 4610 548544 1% /mnt/lustre[MDT:0] lustre-OST0000_UUID 11138 11138 0 100% /mnt/lustre[OST:0] lustre-OST0001_UUID 11080 11080 0 100% /mnt/lustre[OST:1] filesystem_summary: 4610 4610 0 100% /mnt/lustre cleanup... fail_val=0 fail_loc=0 oleg216-server: OST lustre-OST0000_UUID removed from pool lustre.test_220 oleg216-server: Pool lustre.test_220 destroyed unlink 16 files @61201... - unlinked 0 (time 1713492703 ; total 0 ; last 0) total: 16 unlinks in 1 seconds: 16.000000 unlinks/second Destroy the created pools: test_220 PASS 220 (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 221: make sure fault and truncate race to not cause OOM ========================================================== 22:11:48 (1713492708) 121+1 records in 121+1 records out 62200 bytes (62 kB) copied, 3.04433 s, 20.4 kB/s fail_loc=0x80001401 PASS 221 (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 222a: AGL for ls should not trigger CLIO lock failure ========================================================== 22:11:56 (1713492716) total: 10 open/close in 0.10 seconds: 96.97 ops/second fail_loc=0x31a fail_loc=0 PASS 222a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 222b: AGL for rmdir should not trigger CLIO lock failure ========================================================== 22:12:01 (1713492721) total: 10 open/close in 0.10 seconds: 97.66 ops/second fail_loc=0x31a fail_loc=0 PASS 222b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 223: osc reenqueue if without AGL lock granted ================================================================================= 22:12:06 (1713492726) total: 10 open/close in 0.10 seconds: 97.78 ops/second fail_loc=0x31b fail_loc=0 PASS 223 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 224a: Don't panic on bulk IO failure ====== 22:12:12 (1713492732) fail_loc=0x508 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 1.15631 s, 907 kB/s fail_loc=0 Filesystem 1K-blocks Used Available Use% Mounted on 192.168.202.116@tcp:/lustre 7542784 25600 7513088 1% /mnt/lustre PASS 224a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 224b: Don't panic on bulk IO failure ====== 22:12:18 (1713492738) 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0183047 s, 57.3 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0483706 s, 21.7 MB/s at_max=0 at_max=0 fail_val=3 fail_loc=0x80000515 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) 
copied, 3.07918 s, 341 kB/s fail_loc=0 Filesystem 1K-blocks Used Available Use% Mounted on 192.168.202.116@tcp:/lustre 7542784 26624 7510016 1% /mnt/lustre at_max=600 at_max=600 PASS 224b (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 224c: Don't hang if one of md lost during large bulk RPC ========================================================== 22:12:28 (1713492748) oleg216-server: error: get_param: param_path 'osd-*/*/writethrough_cache_enable': No such file or directory oleg216-server: error: get_param: param_path 'osd-*/*/writethrough_cache_enable': No such file or directory oleg216-server: error: set_param: param_path 'osd-*/lustre-OST*/writethrough_cache_enable': No such file or directory oleg216-server: error: set_param: setting 'osd-*/lustre-OST*/writethrough_cache_enable'='1': No such file or directory pdsh@oleg216-client: oleg216-server: ssh exited with exit code 2 error: set_param: setting /proc/fs/lustre/osc/lustre-OST0000-osc-ffff8800add22800/max_pages_per_rpc=1024: Numerical result out of range error: set_param: setting /proc/fs/lustre/osc/lustre-OST0001-osc-ffff8800add22800/max_pages_per_rpc=1024: Numerical result out of range error: set_param: setting 'osc/*/max_pages_per_rpc'='1024': Numerical result out of range Setting lustre.sys.at_max from 600 to 0 Waiting 90s for '0' Updated after 2s: want '0' got '0' Setting lustre.sys.timeout from 20 to 5 Waiting 90s for '5' Updated after 2s: want '5' got '5' fail_loc=0x520 1+0 records in 1+0 records out 8000000 bytes (8.0 MB) copied, 0.192367 s, 41.6 MB/s fail_loc=0 Setting lustre.sys.at_max from 0 to 600 Waiting 90s for '600' Updated after 2s: want '600' got '600' Setting lustre.sys.timeout from 5 to 20 Waiting 90s for '20' Updated after 2s: want '20' got '20' oleg216-server: error: set_param: setting : Invalid argument pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22 oleg216-server: error: set_param: setting : Invalid argument pdsh@oleg216-client: oleg216-server: ssh exited with exit code 22 PASS 224c (21s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 224d: Don't corrupt data on bulk IO timeout ========================================================== 22:12:51 (1713492771) 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0144931 s, 72.4 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.051057 s, 20.5 MB/s at_max=0 at_max=0 fail_val=22 fail_loc=0x80000515 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 21.1041 s, 49.7 kB/s fail_loc=0 Filesystem 1K-blocks Used Available Use% Mounted on 192.168.202.116@tcp:/lustre 7542784 28672 7510016 1% /mnt/lustre at_max=600 at_max=600 PASS 224d (26s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity test_225a skipping excluded test 225a (base 225) SKIP: sanity test_225b skipping excluded test 225b (base 225) debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 226a: call path2fid and fid2path on files of all type ========================================================== 22:13:21 (1713492801) pass with /mnt/lustre/d226a.sanity/fifo and 0x200002342:0x443f:0x0 pass with /mnt/lustre/d226a.sanity/null and 0x200002342:0x4440:0x0 pass with /mnt/lustre/d226a.sanity/none and 0x200002342:0x4441:0x0 pass with /mnt/lustre/d226a.sanity/dir and 0x200002342:0x4442:0x0 pass with /mnt/lustre/d226a.sanity/loop0 and 0x200002342:0x4443:0x0 pass with /mnt/lustre/d226a.sanity/file and 0x200002342:0x4444:0x0 pass with 
/mnt/lustre/d226a.sanity/link and 0x200002342:0x4445:0x0 pass with /mnt/lustre/d226a.sanity/sock and 0x200002342:0x4446:0x0 PASS 226a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 226b: call path2fid and fid2path on files of all type under remote dir ========================================================== 22:13:25 (1713492805) SKIP: sanity test_226b needs >= 2 MDTs SKIP 226b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 226c: call path2fid and fid2path under remote dir with subdir mount ========================================================== 22:13:28 (1713492808) SKIP: sanity test_226c needs >= 2 MDTs SKIP 226c (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 226d: verify fid2path with -n and -fn option ========================================================== 22:13:31 (1713492811) PASS 226d (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 226e: Verify path2fid -0 option with newline and space ========================================================== 22:13:35 (1713492815) PASS 226e (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 227: running truncated executable does not cause OOM ========================================================== 22:13:39 (1713492819) 1+0 records in 1+0 records out 1024 bytes (1.0 kB) copied, 0.00673877 s, 152 kB/s /home/green/git/lustre-release/lustre/tests/sanity.sh: line 21864: 18122 Segmentation fault $MOUNT/date > /dev/null PASS 227 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 228a: try to reuse idle OI blocks ========= 22:13:43 (1713492823) SKIP: sanity test_228a ldiskfs only test SKIP 228a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 228b: idle OI blocks can be reused after MDT restart ========================================================== 22:13:46 (1713492826) SKIP: sanity test_228b ldiskfs only test SKIP 228b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 228c: NOT shrink the last entry in OI index node to recycle idle leaf ========================================================== 22:13:49 (1713492829) SKIP: sanity test_228c ldiskfs only test SKIP 228c (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 229: getstripe/stat/rm/attr changes work on released files ========================================================== 22:13:52 (1713492832) /mnt/lustre/f229.sanity lmm_magic: 0x0BD10BD0 lmm_seq: 0x200002342 lmm_object_id: 0x444e lmm_fid: [0x200002342:0x444e:0x0] lmm_stripe_count: 2 lmm_stripe_size: 4194304 lmm_pattern: released lmm_layout_gen: 0 lmm_stripe_offset: 0 File: '/mnt/lustre/f229.sanity' Size: 0 Blocks: 0 IO Block: 4194304 regular empty file Device: 2c54f966h/743766374d Inode: 144115339507024974 Links: 1 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-18 22:13:52.000000000 -0400 Modify: 2024-04-18 22:13:52.000000000 -0400 Change: 2024-04-18 22:13:52.000000000 -0400 Birth: - PASS 229 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230a: Create remote directory and files under the remote directory ========================================================== 
22:13:55 (1713492835) SKIP: sanity test_230a needs >= 2 MDTs SKIP 230a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230b: migrate directory =================== 22:13:58 (1713492838) SKIP: sanity test_230b needs >= 2 MDTs SKIP 230b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230c: check directory accessibility if migration failed ========================================================== 22:14:01 (1713492841) SKIP: sanity test_230c needs >= 2 MDTs SKIP 230c (1s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity test_230d skipping SLOW test 230d debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230e: migrate multiple local link files === 22:14:05 (1713492845) SKIP: sanity test_230e needs >= 2 MDTs SKIP 230e (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230f: migrate multiple remote link files == 22:14:07 (1713492847) SKIP: sanity test_230f needs >= 2 MDTs SKIP 230f (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230g: migrate dir to non-exist MDT ======== 22:14:10 (1713492850) SKIP: sanity test_230g needs >= 2 MDTs SKIP 230g (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230h: migrate .. and root ================= 22:14:13 (1713492853) SKIP: sanity test_230h needs >= 2 MDTs SKIP 230h (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230i: lfs migrate -m tolerates trailing slashes ========================================================== 22:14:16 (1713492856) SKIP: sanity test_230i needs >= 2 MDTs SKIP 230i (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230j: DoM file data not changed after dir migration ========================================================== 22:14:18 (1713492858) SKIP: sanity test_230j needs >= 2 MDTs SKIP 230j (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230k: file data not changed after dir migration ========================================================== 22:14:21 (1713492861) SKIP: sanity test_230k needs >= 4 MDTs SKIP 230k (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230l: readdir between MDTs won't crash ==== 22:14:24 (1713492864) SKIP: sanity test_230l needs >= 2 MDTs SKIP 230l (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230m: xattrs not changed after dir migration ========================================================== 22:14:27 (1713492867) SKIP: sanity test_230m needs >= 2 MDTs SKIP 230m (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230n: Dir migration with mirrored file ==== 22:14:29 (1713492869) SKIP: sanity test_230n needs >= 2 MDTs SKIP 230n (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230o: dir split =========================== 22:14:32 (1713492872) SKIP: sanity test_230o needs >= 2 MDTs SKIP 230o (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230p: dir merge =========================== 22:14:35 (1713492875) SKIP: sanity test_230p needs >= 2 MDTs SKIP 230p (1s) debug_raw_pointers=0 debug_raw_pointers=0 
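The 230-series above exercises cross-MDT directory migration and restriping, all skipped here because this setup has a single MDT. For reference, a minimal sketch of the operation these tests target, assuming a filesystem with at least two MDTs; the directory name is illustrative:

  lfs mkdir -i 0 /mnt/lustre/migr_dir       # create the directory on MDT index 0
  lfs migrate -m 1 /mnt/lustre/migr_dir     # migrate the directory and its entries to MDT index 1
  lfs getdirstripe -m /mnt/lustre/migr_dir  # verify the MDT index after migration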
debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230q: dir auto split ====================== 22:14:38 (1713492878) SKIP: sanity test_230q needs >= 2 MDTs SKIP 230q (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230r: migrate with too many local locks === 22:14:42 (1713492882) SKIP: sanity test_230r needs >= 2 MDTs SKIP 230r (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230s: lfs mkdir should return -EEXIST if target exists ========================================================== 22:14:46 (1713492886) mdt.lustre-MDT0000.enable_dir_restripe=0 lfs setdirstripe: cannot create dir '/mnt/lustre/d230s.sanity': File exists mdt.lustre-MDT0000.enable_dir_restripe=1 lfs setdirstripe: cannot create dir '/mnt/lustre/d230s.sanity': File exists mdt.lustre-MDT0000.enable_dir_restripe=0 PASS 230s (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230t: migrate directory with project ID set ========================================================== 22:14:52 (1713492892) SKIP: sanity test_230t needs >= 2 MDTs SKIP 230t (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230u: migrate directory by QOS ============ 22:14:56 (1713492896) SKIP: sanity test_230u needs >= 4 MDTs SKIP 230u (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230v: subdir migrated to the MDT where its parent is located ========================================================== 22:14:59 (1713492899) SKIP: sanity test_230v needs >= 4 MDTs SKIP 230v (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230w: non-recursive mode dir migration ==== 22:15:03 (1713492903) SKIP: sanity test_230w needs >= 2 MDTs SKIP 230w (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230x: dir migration check space =========== 22:15:06 (1713492906) SKIP: sanity test_230x needs >= 2 MDTs SKIP 230x (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230y: unlink dir with bad hash type ======= 22:15:10 (1713492910) SKIP: sanity test_230y needs >= 2 MDTs SKIP 230y (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 230z: resume dir migration with bad hash type ========================================================== 22:15:13 (1713492913) SKIP: sanity test_230z needs >= 2 MDTs SKIP 230z (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 231a: checking that reading/writing of BRW RPC size results in one RPC ========================================================== 22:15:17 (1713492917) vm.dirty_writeback_centisecs = 0 vm.dirty_writeback_centisecs = 0 vm.dirty_ratio = 50 vm.dirty_background_ratio = 25 vm.dirty_writeback_centisecs = 500 vm.dirty_background_ratio = 10 vm.dirty_ratio = 20 PASS 231a (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 231b: must not assert on fully utilized OST request buffer ========================================================== 22:15:23 (1713492923) PASS 231b (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 232a: failed lock should not block umount 
========================================================== 22:15:33 (1713492933) fail_loc=0x31c dd: failed to open '/mnt/lustre/d232a.sanity/f232a.sanity': Cannot allocate memory fail_loc=0 192.168.202.116@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg216-client.virtnet /mnt/lustre (opts:) Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre Stopping /mnt/lustre-ost1 (opts:) on oleg216-server Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg216-server: oleg216-server.virtnet: executing set_default_debug all all pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 Started lustre-OST0000 PASS 232a (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 232b: failed data version lock should not block umount ========================================================== 22:15:45 (1713492945) 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0452381 s, 23.2 MB/s fail_loc=0x31c lfs data_version: cannot get version for '/mnt/lustre/d232b.sanity/f232b.sanity': Input/output error fail_loc=0 192.168.202.116@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg216-client.virtnet /mnt/lustre (opts:) Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre Stopping /mnt/lustre-ost1 (opts:) on oleg216-server Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg216-server: oleg216-server.virtnet: executing set_default_debug all all pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 Started lustre-OST0000 PASS 232b (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 233a: checking that OBF of the FS root succeeds ========================================================== 22:15:57 (1713492957) PASS 233a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 233b: checking that OBF of the FS .lustre succeeds ========================================================== 22:16:02 (1713492962) PASS 233b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 234: xattr cache should not crash on ENOMEM ========================================================== 22:16:07 (1713492967) llite.lustre-ffff88012a52e000.xattr_cache=1 fail_loc=0x1405 /mnt/lustre/d234.sanity/f234.sanity: user.attr: Cannot allocate memory fail_loc=0x0 PASS 234 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 235: LU-1715: flock deadlock detection does not work properly ========================================================== 22:16:12 (1713492972) 6646: taking lock1 [100, 200] 6646: done 6646 sleeping 2 6646: putting lock1 [100, 200] 6646: done 6646 Exit 6645: taking lock0 [0, 100] 6645: done 6645 sleeping 1 6645: taking lock3 [100, 300] 6645: expected deadlock 6645: putting lock0 [0, 100] 6645: done 6645 Exit 6644: sleeping 1 6644: taking lock2 [200, 300] 6644: done 6644: taking lock0 [0, 100] 6644: done 6644: putting lock0 [0, 100] 6644: done 6644: putting lock2 [200, 300] 6644: done 6644 Exit PASS 235 (4s) debug_raw_pointers=0 debug_raw_pointers=0 
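Tests 232a, 232b, and 234 above exercise error paths through Lustre's fail_loc fault-injection mechanism rather than real faults: the test arms an injection point, runs an operation that is now forced to fail (the dd, lfs data_version, and xattr lookups whose errors appear above), then disarms it. A minimal sketch of that pattern, reusing the 0x31c value shown for test 232a; the file name is illustrative:

  lctl set_param fail_loc=0x31c                           # arm the injection point
  dd if=/dev/zero of=/mnt/lustre/f232.tmp bs=1M count=1   # fails ("Cannot allocate memory") while armed
  lctl set_param fail_loc=0                               # disarm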
debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 236: Layout swap on open unlinked file ==== 22:16:18 (1713492978) PASS 236 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 238: Verify linkea consistency ============ 22:16:24 (1713492984) PASS 238 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 239A: osp_sync test ======================= 22:16:29 (1713492989) - open/close 2458 (time 1713493000.37 total 10.00 last 245.76) - open/close 4916 (time 1713493010.37 total 20.00 last 245.80) total: 5000 open/close in 20.33 seconds: 245.96 ops/second - unlinked 0 (time 1713493012 ; total 0 ; last 0) total: 5000 unlinks in 11 seconds: 454.545441 unlinks/second PASS 239A (38s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 239a: process invalid osp sync record correctly ========================================================== 22:17:08 (1713493028) fail_loc=0x2100 sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 239a (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 239b: process osp sync record with ENOMEM error correctly ========================================================== 22:17:26 (1713493046) fail_loc=0x2101 sleep 5 for ZFS zfs Waiting for MDT destroys to complete fail_loc=0 sleep 5 for ZFS zfs Waiting for MDT destroys to complete PASS 239b (14s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 240: race between ldlm enqueue and the connection RPC (no ASSERT) ========================================================== 22:17:42 (1713493062) SKIP: sanity test_240 needs >= 2 MDTs SKIP 240 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 241a: bio vs dio ========================== 22:17:46 (1713493066) 1+0 records in 1+0 records out 40960 bytes (41 kB) copied, 0.0126241 s, 3.2 MB/s -rw-r--r-- 1 root root 40960 Apr 18 22:17 /mnt/lustre/f241a.sanity ldlm.namespaces.lustre-OST0001-osc-ffff88012a52e000.lock_unused_count=1 [last message repeated several hundred times]
PASS 241a (34s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 241b: dio vs dio ========================== 22:18:22 (1713493102) 1+0 records in 1+0 records out 40960 bytes (41 kB) copied, 0.0126361 s, 3.2 MB/s -rw-r--r-- 1 root root 40960 Apr 18 22:18 /mnt/lustre/f241b.sanity PASS 241b (16s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 242: mdt_readpage failure should not cause directory unreadable ========================================================== 22:18:40 (1713493120) fail_loc=0x105 /bin/ls: reading directory /mnt/lustre/d242.sanity: Cannot allocate memory fail_loc=0 f242.sanity PASS 242 (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 243: various group lock tests ============= 22:18:46 (1713493126) Starting test test10 at 1713493127 Finishing test test10 at 1713493130 Starting test test11 at 1713493130 Finishing test test11 at 1713493164 Starting test test12 at 1713493164 Finishing test test12 at 1713493164 Starting test test20 at 1713493164 Finishing test test20 at 1713493164 Starting test test30 at 1713493164 Finishing test test30 at 1713493165 Starting test test40 at 1713493165 Finishing test test40 at 1713493165 PASS 243 (41s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 244a: sendfile with group lock tests ====== 22:19:29 (1713493169) 35+0 records in 35+0 records out 36700160 bytes (37 MB) copied, 0.884171 s, 41.5 MB/s Starting test test10 at 1713493171 
Finishing test test10 at 1713493175 Starting test test11 at 1713493175 Finishing test test11 at 1713493181 Starting test test12 at 1713493181 Finishing test test12 at 1713493187 Starting test test13 at 1713493187 Finishing test test13 at 1713493193 Starting test test14 at 1713493193 Finishing test test14 at 1713493201 Starting test test15 at 1713493201 Finishing test test15 at 1713493202 Starting test test16 at 1713493202 Finishing test test16 at 1713493203 PASS 244a (36s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 244b: multi-threaded write with group lock ========================================================== 22:20:06 (1713493206) PASS 244b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 245a: check mdc connection flag/data: multiple modify RPCs ========================================================== 22:20:10 (1713493210) connect_flags: [ write_grant, server_lock, version, acl, xattr, create_on_write, inode_bit_locks, getattr_by_fid, no_oh_for_devices, max_byte_per_rpc, early_lock_cancel, adaptive_timeouts, lru_resize, alt_checksum_algorithm, fid_is_enabled, version_recovery, pools, grant_shrink, large_ea, full20, layout_lock, 64bithash, jobstats, umask, einprogress, grant_param, lvb_type, short_io, flock_deadlock, disp_stripe, open_by_fid, lfsck, multi_mod_rpcs, dir_stripe, subtree, bulk_mbits, second_flags, file_secctx, dir_migrate, sum_statfs, overstriping, flr, lock_convert, archive_id_array, increasing_xid, selinux_policy, lsom, pcc, crush, async_discard, getattr_pfid, dom_lvb, reply_mbits, batch_rpc, atomic_open_lock, dmv_imp_inherit, unaligned_dio ] mdc.lustre-MDT0000-mdc-ffff88012a52e000.import= import: name: lustre-MDT0000-mdc-ffff88012a52e000 target: lustre-MDT0000_UUID state: FULL connect_flags: [ write_grant, server_lock, version, acl, xattr, create_on_write, inode_bit_locks, getattr_by_fid, no_oh_for_devices, max_byte_per_rpc, early_lock_cancel, adaptive_timeouts, lru_resize, alt_checksum_algorithm, fid_is_enabled, version_recovery, pools, grant_shrink, large_ea, full20, layout_lock, 64bithash, jobstats, umask, einprogress, grant_param, lvb_type, short_io, flock_deadlock, disp_stripe, open_by_fid, lfsck, multi_mod_rpcs, dir_stripe, subtree, bulk_mbits, second_flags, file_secctx, dir_migrate, sum_statfs, overstriping, flr, lock_convert, archive_id_array, increasing_xid, selinux_policy, lsom, pcc, crush, async_discard, getattr_pfid, dom_lvb, reply_mbits, batch_rpc, atomic_open_lock, dmv_imp_inherit, unaligned_dio ] connect_data: flags: 0xae7a5e7be344d3b8 instance: 7 target_version: 2.15.62.25 initial_grant: 3407872 max_brw_size: 1048576 ibits_known: 0x7f grant_block_size: 131072 grant_inode_size: 4096 grant_max_extent_size: 134217728 grant_extent_tax: 655360 cksum_types: 0xf7 max_easize: 65536 max_mod_rpcs: 8 import_flags: [ replayable, pingable, connect_tried ] connection: failover_nids: [ "192.168.202.116@tcp" ] nids_stats: "192.168.202.116@tcp": { connects: 1, replied: 1, uptodate: true, sec_ago: 263 } current_connection: "192.168.202.116@tcp" connection_attempts: 1 generation: 1 in-progress_invalidations: 0 idle: 3 sec rpcs: inflight: 0 unregistering: 0 timeouts: 0 avg_waittime: 1915 usecs service_estimates: services: 5 sec network: 5 sec transactions: last_replay: 0 peer_committed: 30064787997 last_checked: 30064787997 PASS 245a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 245b: check osp 
connection flag/data: multiple modify RPCs ========================================================== 22:20:13 (1713493213) SKIP: sanity test_245b needs >= 2 MDTs SKIP 245b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 247a: mount subdir as fileset ============= 22:20:16 (1713493216) Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre/d247a.sanity /mnt/lustre_d247a.sanity 192.168.202.116@tcp:/lustre/d247a.sanity /mnt/lustre_d247a.sanity lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg216-client.virtnet /mnt/lustre_d247a.sanity (opts:) PASS 247a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 247b: mount subdir that does not exist ==== 22:20:20 (1713493220) Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre/d247b.sanity /mnt/lustre_d247b.sanity mount.lustre: mount oleg216-server@tcp:/lustre/d247b.sanity at /mnt/lustre_d247b.sanity failed: No such file or directory Is the MGS specification correct? Is the filesystem name correct? If upgrading, is the copied client log valid? (see upgrade docs) PASS 247b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 247c: running fid2path outside subdirectory root ========================================================== 22:20:23 (1713493223) Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre/d247c.sanity /mnt/lustre_d247c.sanity lfs fid2path: cannot find /mnt/lustre_d247c.sanity [0x200000007:0x1:0x0]: No such file or directory 192.168.202.116@tcp:/lustre/d247c.sanity /mnt/lustre_d247c.sanity lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg216-client.virtnet /mnt/lustre_d247c.sanity (opts:) PASS 247c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 247d: running fid2path inside subdirectory root ========================================================== 22:20:27 (1713493227) Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre/d247d.sanity /mnt/lustre_d247d.sanity /mnt/lustre_d247d.sanity [0x2000032e2:0x13b4:0x0] /mnt/lustre_d247d.sanity/// [0x2000032e2:0x13b4:0x0] /mnt/lustre_d247d.sanity/dir1 [0x2000032e2:0x13b4:0x0] lfs fid2path: cannot resolve mount point for '/mnt/lustre_d247d.sanity_wrong': No such device 192.168.202.116@tcp:/lustre/d247d.sanity /mnt/lustre_d247d.sanity lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg216-client.virtnet /mnt/lustre_d247d.sanity (opts:) PASS 247d (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 247e: mount .. as fileset ================= 22:20:31 (1713493231) Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre/.. /mnt/lustre_d247e.sanity mount.lustre: mount oleg216-server@tcp:/lustre/.. at /mnt/lustre_d247e.sanity failed: Invalid argument This may have multiple causes. Is 'lustre/..' the correct filesystem name? Are the mount options correct? Check the syslog for more info. 
PASS 247e (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 247f: mount striped or remote directory as fileset ========================================================== 22:20:35 (1713493235) SKIP: sanity test_247f needs >= 2 MDTs SKIP 247f (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 247g: striped directory submount revalidate ROOT from cache ========================================================== 22:20:37 (1713493237) SKIP: sanity test_247g needs > 1 MDTs SKIP 247g (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 247h: remote directory submount revalidate ROOT from cache ========================================================== 22:20:40 (1713493240) SKIP: sanity test_247h needs > 1 MDTs SKIP 247h (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 248a: fast read verification ============== 22:20:42 (1713493242) /mnt/lustre/f248a.sanity has size 134217728 OK Test 1: verify that fast read is 4 times faster on cache read Test 2: verify the performance between big and small read PASS 248a (38s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 248b: test short_io read and write for both small and large sizes ========================================================== 22:21:22 (1713493282) bs=53248 count=113 normal buffered write 113+0 records in 113+0 records out 6017024 bytes (6.0 MB) copied, 0.0626268 s, 96.1 MB/s bs=47008 count=128 oflag=dsync normal write f248b.sanity.0 128+0 records in 128+0 records out 6017024 bytes (6.0 MB) copied, 2.73301 s, 2.2 MB/s bs=11752 count=512 oflag=dsync small write f248b.sanity.1 512+0 records in 512+0 records out 6017024 bytes (6.0 MB) copied, 9.11267 s, 660 kB/s bs=4096 count=1469 iflag=direct small read f248b.sanity.1 1469+0 records in 1469+0 records out 6017024 bytes (6.0 MB) copied, 4.34289 s, 1.4 MB/s test invalid parameter 2MB error: set_param: setting /sys/fs/lustre/osc/lustre-OST0000-osc-ffff88012a52e000/short_io_bytes=2M: Numerical result out of range error: set_param: setting 'osc/lustre-OST0000*/short_io_bytes'='2M': Numerical result out of range test maximum parameter 512KB osc.lustre-OST0000-osc-ffff88012a52e000.short_io_bytes=512K osc.lustre-OST0000-osc-ffff88012a52e000.short_io_bytes=262144 test large parameter 64KB osc.lustre-OST0000-osc-ffff88012a52e000.short_io_bytes=65536 osc.lustre-OST0001-osc-ffff88012a52e000.short_io_bytes=65536 osc.lustre-OST0000-osc-ffff88012a52e000.short_io_bytes=65536 bs=47008 count=128 oflag=dsync large write f248b.sanity.2 128+0 records in 128+0 records out 6017024 bytes (6.0 MB) copied, 3.51357 s, 1.7 MB/s bs=53248 count=113 oflag=direct large write f248b.sanity.3 113+0 records in 113+0 records out 6017024 bytes (6.0 MB) copied, 1.8001 s, 3.3 MB/s bs=53248 count=113 iflag=direct large read f248b.sanity.2 113+0 records in 113+0 records out 6017024 bytes (6.0 MB) copied, 0.682549 s, 8.8 MB/s bs=53248 count=113 iflag=direct large read f248b.sanity.3 113+0 records in 113+0 records out 6017024 bytes (6.0 MB) copied, 0.641496 s, 9.4 MB/s osc.lustre-OST0000-osc-ffff88012a52e000.short_io_bytes=16384 osc.lustre-OST0001-osc-ffff88012a52e000.short_io_bytes=16384 PASS 248b (26s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 248c: verify whole file read behavior ===== 22:21:49 (1713493309) 
llite.lustre-ffff88012a52e000.read_ahead_stats=c llite.lustre-ffff88012a52e000.max_read_ahead_whole_mb=64 whole file readahead of 64 MiB took 34.1 seconds llite.lustre-ffff88012a52e000.read_ahead_stats= snapshot_time 1713493312.348003985 secs.nsecs start_time 1713493310.321262482 secs.nsecs elapsed_time 2.026741503 secs.nsecs hits 16382 samples [pages] misses 2 samples [pages] zero_size_window 1 samples [pages] failed_to_fast_read 3 samples [pages] readahead_pages 1 samples [pages] 16382 16382 16382 llite.lustre-ffff88012a52e000.read_ahead_stats=c llite.lustre-ffff88012a52e000.max_read_ahead_whole_mb=8 non-whole file readahead of 64 MiB took 32.8 seconds llite.lustre-ffff88012a52e000.read_ahead_stats= snapshot_time 1713493314.818133777 secs.nsecs start_time 1713493312.352364876 secs.nsecs elapsed_time 2.465768901 secs.nsecs hits 16382 samples [pages] misses 2 samples [pages] zero_size_window 1 samples [pages] failed_to_fast_read 3 samples [pages] readahead_pages 1 samples [pages] 16382 16382 16382 Test passed on attempt 1 llite.lustre-ffff88012a52e000.max_read_ahead_whole_mb=64 PASS 248c (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 249: Write above 2T file size ============= 22:21:58 (1713493318) 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00461966 s, 887 kB/s PASS 249 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 250: Write above 16T limit ================ 22:22:03 (1713493323) lfs: getstripe for '/mnt/lustre/f250.sanity' failed: No such file or directory SKIP: sanity test_250 no 16TB file size limit on ZFS SKIP 250 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 251a: Handling short read and write correctly ========================================================== 22:22:06 (1713493326) fail_loc=0xa0001407 fail_val=1 fail_loc=0xa0001407 fail_val=1 PASS 251a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 251b: short read restore offset correctly ========================================================== 22:22:11 (1713493331) 4+0 records in 4+0 records out 4096 bytes (4.1 kB) copied, 0.00759 s, 540 kB/s fail_loc=0x1431 fail_val=5 PASS 251b (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 252: check lr_reader tool ================= 22:22:20 (1713493340) SKIP: sanity test_252 ldiskfs only test SKIP 252 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 253: Check object allocation limit ======== 22:22:23 (1713493343) fallocate on zfs doesn't consume space fallocate not supported SKIP: sanity test_253 need >= 2.13.57 and ldiskfs for fallocate SKIP 253 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 254: Check changelog size ================= 22:22:26 (1713493346) 17472 mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl1' lustre-MDT0000: clear the changelog for cl1 of all records Changelog size 25808 Changelog size after work 36240 lustre-MDT0000: clear the changelog for cl1 of all records lustre-MDT0000: Deregistered changelog user #1 PASS 254 (4s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity test_255a skipping excluded test 255a (base 255) SKIP: sanity test_255b skipping excluded test 255b (base 255) SKIP: sanity test_255c skipping excluded test 255c (base 
255) SKIP: sanity test_256 skipping excluded test 256 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 257: xattr locks are not lost ============= 22:22:36 (1713493356) File: '/mnt/lustre/d257.sanity' Size: 512 Blocks: 1 IO Block: 1048576 directory Device: 2c54f966h/743766374d Inode: 144115406615876553 Links: 2 Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-18 22:22:37.000000000 -0400 Modify: 2024-04-18 22:22:37.000000000 -0400 Change: 2024-04-18 22:22:37.000000000 -0400 Birth: - fail_val=0 fail_loc=0x80000161 Stopping /mnt/lustre-mds1 (opts:) on oleg216-server Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg216-server: oleg216-server.virtnet: executing set_default_debug all all pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 Started lustre-MDT0000 affected facets: mds1 oleg216-server: oleg216-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475 oleg216-server: *.lustre-MDT0000.recovery_status status: COMPLETE PASS 257 (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 258a: verify i_mutex security behavior when suid attribute is set ========================================================== 22:22:48 (1713493368) fail_loc=0x141c running as uid/gid/euid/egid 500/500/500/500, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/f258a.sanity] [bs=4k] [count=1] [oflag=append] 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00502161 s, 816 kB/s PASS 258a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 258b: verify i_mutex security behavior ==== 22:22:53 (1713493373) fail_loc=0x141d running as uid/gid/euid/egid 500/500/500/500, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/f258b.sanity] [bs=4k] [count=1] [oflag=append] 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00493726 s, 830 kB/s PASS 258b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 259: crash at delayed truncate ============ 22:22:58 (1713493378) SKIP: sanity test_259 ldiskfs only test SKIP 259 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 260: Check mdc_close fail ================= 22:23:02 (1713493382) fail_loc=0x80000806 PASS 260 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 270a: DoM: basic functionality tests ====== 22:23:07 (1713493387) 192+0 records in 192+0 records out 196608 bytes (197 kB) copied, 0.0633509 s, 3.1 MB/s 3+0 records in 3+0 records out 196608 bytes (197 kB) copied, 0.0781151 s, 2.5 MB/s 1984+0 records in 1984+0 records out 2031616 bytes (2.0 MB) copied, 0.366653 s, 5.5 MB/s 31+0 records in 31+0 records out 2031616 bytes (2.0 MB) copied, 0.759909 s, 2.7 MB/s PASS 270a (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 270b: DoM: maximum size overflow checks for DoM-only file ========================================================== 22:23:14 (1713493394) truncate: cannot truncate '/mnt/lustre/d270b.sanity/dom_file' to length 1048577: File too large dd: error writing '/mnt/lustre/d270b.sanity/dom_file': No data available 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00268724 s, 0.0 kB/s 1+0 records in 1+0 records out 1048573 bytes (1.0 MB) copied, 0.0412156 s, 25.4 MB/s /home/green/git/lustre-release/lustre/tests/sanity.sh: line 24830: echo: write error:
File too large PASS 270b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 270c: DoM: DoM EA inheritance tests ======= 22:23:19 (1713493399) PASS 270c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 270d: DoM: change striping from DoM to RAID0 ========================================================== 22:23:24 (1713493404) PASS 270d (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 270e: DoM: lfs find with DoM files test === 22:23:28 (1713493408) total: 20 open/close in 0.19 seconds: 105.91 ops/second total: 10 open/close in 0.10 seconds: 103.64 ops/second Test 1: lfs find 20 DOM files by layout: OK Test 2: lfs find 1 DOM dir by layout: OK Test 4: lfs find 20 DOM files by stripe size: OK Test 5: lfs find no DOM files by stripe index: OK PASS 270e (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 270f: DoM: maximum DoM stripe size checks ========================================================== 22:23:34 (1713493414) lfs setstripe: cannot create composite file '/mnt/lustre/d270f.sanity/dom_file': Invalid argument oleg216-server: error: set_param: setting /sys/fs/lustre/lod/lustre-MDT0000-mdtlov/dom_stripesize=2147483648: Numerical result out of range oleg216-server: error: set_param: setting 'lod/lustre-MDT0000-mdtlov/dom_stripesize'='2147483648': Numerical result out of range pdsh@oleg216-client: oleg216-server: ssh exited with exit code 34 65536 PASS 270f (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 270g: DoM: default DoM stripe size depends on free space ========================================================== 22:23:43 (1713493423) DOM threshold is 50% free space Free space: 40%, default DOM stripe: 512K Free space: 20%, default DOM stripe: 256K Free space: 0%, default DOM stripe: 0K Free space: 15%, default DOM stripe: 256K Free space: 30%, default DOM stripe: 512K Free space: 55%, default DOM stripe: 1024K PASS 270g (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 270h: DoM: DoM stripe removal when disabled on server ========================================================== 22:23:55 (1713493435) PASS 270h (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 270i: DoM: setting invalid DoM striping should fail ========================================================== 22:24:01 (1713493441) lfs setstripe: Invalid pattern: '-L mdt', must be specified with -E: Invalid argument (22) lfs setstripe: Invalid pattern: '-L mdt', must be specified with -E: Invalid argument (22) Option 'stripe-count' can't be specified with Data-on-MDT component: 1152921504606846979 lfs setstripe: invalid layout Create a file with specified striping/composite layout, or set the default layout on an existing directory. 
Usage: setstripe [--component-add|--component-del|--delete|-d] [--comp-set --comp-id|-I COMP_ID|--comp-flags=COMP_FLAGS] [--component-end|-E END_OFFSET] [--copy=SOURCE_LAYOUT_FILE]|--yaml|-y YAML_TEMPLATE_FILE] [--extension-size|--ext-size|-z EXT_SIZE] [--help|-h] [--foreign=FOREIGN_TYPE --xattr|-x LAYOUT] [--layout|-L PATTERN] [--mode FILE_MODE] [--mirror-count|-N[MIRROR_COUNT]] [--ost|-o OST_INDEX[,OST_INDEX,...]] [--overstripe-count|-C STRIPE_COUNT] [--pool|-p POOL_NAME] [--stripe-count|-c STRIPE_COUNT] [--stripe-index|-i START_OST_IDX] [--stripe-size|-S STRIPE_SIZE] FILENAME|DIRECTORY Option 'stripe-count' can't be specified with Data-on-MDT component: 1152921504606846979 lfs setstripe: invalid layout Create a file with specified striping/composite layout, or set the default layout on an existing directory. Usage: setstripe [--component-add|--component-del|--delete|-d] [--comp-set --comp-id|-I COMP_ID|--comp-flags=COMP_FLAGS] [--component-end|-E END_OFFSET] [--copy=SOURCE_LAYOUT_FILE]|--yaml|-y YAML_TEMPLATE_FILE] [--extension-size|--ext-size|-z EXT_SIZE] [--help|-h] [--foreign=FOREIGN_TYPE --xattr|-x LAYOUT] [--layout|-L PATTERN] [--mode FILE_MODE] [--mirror-count|-N[MIRROR_COUNT]] [--ost|-o OST_INDEX[,OST_INDEX,...]] [--overstripe-count|-C STRIPE_COUNT] [--pool|-p POOL_NAME] [--stripe-count|-c STRIPE_COUNT] [--stripe-index|-i START_OST_IDX] [--stripe-size|-S STRIPE_SIZE] FILENAME|DIRECTORY PASS 270i (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 270j: DoM migration: DOM file to the OST-striped file (plain) ========================================================== 22:24:06 (1713493446) 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0607429 s, 17.3 MB/s PASS 270j (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 271a: DoM: data is cached for read after write ========================================================== 22:24:12 (1713493452) 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00508454 s, 806 kB/s /mnt/lustre/d271a.sanity/dom PASS 271a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 271b: DoM: no glimpse RPC for stat (DoM only file) ========================================================== 22:24:17 (1713493457) 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00457693 s, 895 kB/s /mnt/lustre/d271b.sanity/dom has type file OK /mnt/lustre/d271b.sanity/dom has size 4096 OK /mnt/lustre/d271b.sanity/dom has type file OK /mnt/lustre/d271b.sanity/dom has size 4096 OK PASS 271b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 271ba: DoM: no glimpse RPC for stat (combined file) ========================================================== 22:24:22 (1713493462) 1+0 records in 1+0 records out 2097152 bytes (2.1 MB) copied, 0.099346 s, 21.1 MB/s /mnt/lustre/d271ba.sanity/dom has type file OK /mnt/lustre/d271ba.sanity/dom has size 2097152 OK /mnt/lustre/d271ba.sanity/dom has type file OK /mnt/lustre/d271ba.sanity/dom has size 2097152 OK PASS 271ba (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 271c: DoM: IO lock at open saves enqueue RPCs ========================================================== 22:24:27 (1713493467) total: 1000 open/close in 4.04 seconds: 247.28 ops/second total: write 2043438 bytes in 12 seconds: 170286.50 bytes/second snapshot_time 1713493485.617077839 
secs.nsecs start_time 1713493473.836025189 secs.nsecs elapsed_time 11.781052650 secs.nsecs req_waittime 4370 samples [usecs] 889 7456 11325190 31310931208 req_active 4370 samples [reqs] 1 9 7107 20837 ldlm_ibits_enqueue 2000 samples [reqs] 1 1 2000 2000 write_bytes 370 samples [bytes] 9 4087 772214 2141494328 ost_write 370 samples [usecs] 1579 7456 1460434 6155664452 mds_close 1000 samples [usecs] 1196 3706 2628898 6958990410 ldlm_cancel 1000 samples [usecs] 889 3328 2065848 4315860298 - unlinked 0 (time 1713493486 ; total 0 ; last 0) total: 1000 unlinks in 3 seconds: 333.333344 unlinks/second total: 1000 open/close in 4.01 seconds: 249.68 ops/second total: write 2043438 bytes in 9 seconds: 227048.67 bytes/second - unlinked 0 (time 1713493505 ; total 0 ; last 0) total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second PASS 271c (43s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 271d: DoM: read on open (1K file in reply buffer) ========================================================== 22:25:12 (1713493512) 1+0 records in 1+0 records out 1000 bytes (1.0 kB) copied, 0.000273643 s, 3.7 MB/s 1+0 records in 1+0 records out 1000 bytes (1.0 kB) copied, 0.00338296 s, 296 kB/s Append to the same page ... DONE Open and read file ... DONE PASS 271d (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 271f: DoM: read on open (200K file and read tail) ========================================================== 22:25:18 (1713493518) 1+0 records in 1+0 records out 265000 bytes (265 kB) copied, 0.0049339 s, 53.7 MB/s 1+0 records in 1+0 records out 265000 bytes (265 kB) copied, 0.0149454 s, 17.7 MB/s Append to the same page ... DONE Open and read file ... DONE PASS 271f (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 271g: Discard DoM data vs client flush race ========================================================== 22:25:23 (1713493523) /mnt/lustre/f271g.sanity has type file OK fail_loc=0x80000314 PASS 271g (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 272a: DoM migration: new layout with the same DOM component ========================================================== 22:25:29 (1713493529) 1+0 records in 1+0 records out 524288 bytes (524 kB) copied, 0.0453414 s, 11.6 MB/s PASS 272a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 272b: DoM migration: DOM file to the OST-striped file (plain) ========================================================== 22:25:34 (1713493534) 1+0 records in 1+0 records out 2097152 bytes (2.1 MB) copied, 0.123726 s, 16.9 MB/s PASS 272b (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 272c: DoM migration: DOM file to the OST-striped file (composite) ========================================================== 22:25:41 (1713493541) 1+0 records in 1+0 records out 2097152 bytes (2.1 MB) copied, 0.127472 s, 16.5 MB/s PASS 272c (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 272d: DoM mirroring: OST-striped mirror to DOM file ========================================================== 22:25:47 (1713493547) 1+0 records in 1+0 records out 2097152 bytes (2.1 MB) copied, 0.126764 s, 16.5 MB/s lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) PASS 272d (4s) debug_raw_pointers=0 
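Note: the DoM migration and mirroring steps exercised by tests 272a-272d above can be reproduced by hand. A minimal sketch, assuming a client mounted at /mnt/lustre with two OSTs (as in this run); the file name dom_demo is illustrative, not from the test suite:

    # create a composite file: first 1 MiB on the MDT (Data-on-MDT),
    # remainder (to EOF) striped over OSTs
    lfs setstripe -E 1M -L mdt -E -1 /mnt/lustre/dom_demo
    dd if=/dev/zero of=/mnt/lustre/dom_demo bs=1M count=2
    # migrate the whole file to a plain 2-stripe OST layout ...
    lfs migrate -c 2 /mnt/lustre/dom_demo
    # ... then confirm the mdt component is gone from the layout
    lfs getstripe /mnt/lustre/dom_demo

Migrating in the other direction, OST-striped back to DoM as in test 272f below, uses the same lfs migrate call with an -E ... -L mdt component specification instead of -c.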
debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 272e: DoM mirroring: DOM mirror to the OST-striped file ========================================================== 22:25:53 (1713493553) 1+0 records in 1+0 records out 2097152 bytes (2.1 MB) copied, 0.127067 s, 16.5 MB/s lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22) PASS 272e (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 272f: DoM migration: OST-striped file to DOM file ========================================================== 22:26:00 (1713493560) 1+0 records in 1+0 records out 2097152 bytes (2.1 MB) copied, 0.123801 s, 16.9 MB/s lfs migrate: cannot get UNLOCK lease, ext 8: Invalid argument (22) /mnt/lustre/d272f.sanity/f272f.sanity /mnt/lustre/d272f.sanity/f272f.sanity PASS 272f (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 273a: DoM: layout swapping should fail with DOM ========================================================== 22:26:06 (1713493566) lfs swap_layouts: error: cannot swap layout between '/mnt/lustre/d273a.sanity/f273a.sanity_plain' and '/mnt/lustre/d273a.sanity/f273a.sanity_dom': Operation not supported (95) lfs swap_layouts: error: cannot swap layout between '/mnt/lustre/d273a.sanity/f273a.sanity_dom' and '/mnt/lustre/d273a.sanity/f273a.sanity_plain': Operation not supported (95) lfs swap_layouts: error: cannot swap layout between '/mnt/lustre/d273a.sanity/f273a.sanity_comp' and '/mnt/lustre/d273a.sanity/f273a.sanity_dom': Operation not supported (95) lfs swap_layouts: error: cannot swap layout between '/mnt/lustre/d273a.sanity/f273a.sanity_dom' and '/mnt/lustre/d273a.sanity/f273a.sanity_comp': Operation not supported (95) PASS 273a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 273b: DoM: race writeback and object destroy ========================================================== 22:26:11 (1713493571) fail_loc=0x8000016b fail_val=2 PASS 273b (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 273c: race writeback and object destroy === 22:26:19 (1713493579) fail_loc=0x800001e1 fail_val=2 PASS 273c (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 275: Read on a canceled duplicate lock ==== 22:26:27 (1713493587) 2+0 records in 2+0 records out 2097152 bytes (2.1 MB) copied, 0.086855 s, 24.1 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.030803 s, 34.0 MB/s fail_loc=0x8000031f fail_loc=0x8000032b 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0372881 s, 28.1 MB/s PASS 275 (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 276: Race between mount and obd_statfs ==== 22:26:34 (1713493594) Stopping /mnt/lustre-ost1 (opts:) on oleg216-server Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg216-server: oleg216-server.virtnet: executing set_default_debug all all pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 Started lustre-OST0000 Stopping /mnt/lustre-ost1 (opts:) on oleg216-server Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg216-server: oleg216-server.virtnet: executing set_default_debug all all pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 Started 
lustre-OST0000 [... the identical Stopping/Starting cycle for lustre-OST0000 repeats many more times while the background obd_statfs loop runs ...] /home/green/git/lustre-release/lustre/tests/test-framework.sh: line 4538: 386 Killed do_facet ost1 "(while true; do $LCTL get_param obdfilter.*.filesfree > /dev/null 2>&1; done) & pid=\\\$!; echo \\\$pid > $TMP/sanity_276_pid" PASS 276 (120s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 277: Direct IO shall drop page cache ====== 22:28:37 (1713493717) ldlm.namespaces.MGC192.168.202.116@tcp.lru_size=0 ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a52e000.lru_size=0 ldlm.namespaces.lustre-OST0000-osc-ffff88012a52e000.lru_size=0 ldlm.namespaces.lustre-OST0001-osc-ffff88012a52e000.lru_size=0 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0498348 s, 21.0 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.074814 s, 14.0 MB/s PASS 277 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 278: Race starting MDS between MDTs stop/start ========================================================== 22:28:42 (1713493722) SKIP: sanity test_278 needs >= 2 MDTs SKIP 278 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 280: Race between MGS umount and client llog processing ==========================================================
22:28:46 (1713493726) 192.168.202.116@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg216-client.virtnet /mnt/lustre (opts:) fail_loc=0x8000015e fail_val=0 Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre Stopping /mnt/lustre-mds1 (opts:) on oleg216-server Starting mgs: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 mount.lustre: mount oleg216-server@tcp:/lustre at /mnt/lustre failed: Input/output error Is the MGS running? oleg216-server: oleg216-server.virtnet: executing set_default_debug all all pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre PASS 280 (27s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300a: basic striped dir sanity test ======= 22:29:16 (1713493756) SKIP: sanity test_300a needs >= 2 MDTs SKIP 300a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300b: check ctime/mtime for striped dir === 22:29:20 (1713493760) SKIP: sanity test_300b needs >= 2 MDTs SKIP 300b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300c: chown && check ls under striped directory ========================================================== 22:29:23 (1713493763) SKIP: sanity test_300c needs >= 2 MDTs SKIP 300c (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300d: check default stripe under striped directory ========================================================== 22:29:26 (1713493766) SKIP: sanity test_300d needs >= 2 MDTs SKIP 300d (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300e: check rename under striped directory ========================================================== 22:29:30 (1713493770) SKIP: sanity test_300e needs >= 2 MDTs SKIP 300e (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300f: check rename cross striped directory ========================================================== 22:29:33 (1713493773) SKIP: sanity test_300f needs >= 2 MDTs SKIP 300f (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300g: check default striped directory for normal directory ========================================================== 22:29:37 (1713493777) SKIP: sanity test_300g needs >= 2 MDTs SKIP 300g (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300h: check default striped directory for striped directory ========================================================== 22:29:40 (1713493780) SKIP: sanity test_300h needs >= 2 MDTs SKIP 300h (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300i: client handle unknown hash type striped directory ========================================================== 22:29:44 (1713493784) SKIP: sanity test_300i needs >= 2 MDTs SKIP 300i (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300j: test large update record ============ 22:29:48 (1713493788) SKIP: sanity test_300j needs >= 2 MDTs SKIP 300j (1s) debug_raw_pointers=0 
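Note: tests 300a-300j above are skipped because this setup has a single MDT. On a filesystem with two or more MDTs, the striped-directory behavior they exercise can be sketched as follows; the directory name striped_demo is illustrative, not from the test suite:

    # create a directory whose entries are striped across 2 MDTs,
    # starting at MDT index 0
    lfs mkdir -c 2 -i 0 /mnt/lustre/striped_demo
    # show the stripe count and which MDTs hold the shards
    lfs getdirstripe /mnt/lustre/striped_demo

On this single-MDT run the equivalent request fails, which is exactly what test 300m below verifies when creating under a striped directory returns "No such device".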
debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300k: test large striped directory ======== 22:29:52 (1713493792) SKIP: sanity test_300k needs >= 2 MDTs SKIP 300k (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300l: non-root user to create dir under striped dir with stale layout ========================================================== 22:29:55 (1713493795) SKIP: sanity test_300l needs >= 2 MDTs SKIP 300l (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300m: setstriped directory on single MDT FS ========================================================== 22:29:59 (1713493799) mkdir: cannot create directory '/mnt/lustre/d300m.sanity/striped_dir/c': No such device PASS 300m (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300n: non-root user to create dir under striped dir with default EA ========================================================== 22:30:04 (1713493804) SKIP: sanity test_300n needs >= 2 MDTs SKIP 300n (1s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity test_300o skipping SLOW test 300o debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300p: create striped directory without space ========================================================== 22:30:09 (1713493809) SKIP: sanity test_300p needs >= 2 MDTs SKIP 300p (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300q: create remote directory under orphan directory ========================================================== 22:30:12 (1713493812) SKIP: sanity test_300q needs >= 2 MDTs SKIP 300q (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300r: test -1 striped directory =========== 22:30:16 (1713493816) SKIP: sanity test_300r needs >= 2 MDTs SKIP 300r (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300s: test lfs mkdir -c without -i ======== 22:30:20 (1713493820) SKIP: sanity test_300s needs >= 2 MDTs SKIP 300s (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300t: test max_mdt_stripecount ============ 22:30:23 (1713493823) SKIP: sanity test_300t needs at least 2 MDTs SKIP 300t (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300ua: basic overstriped dir sanity test == 22:30:27 (1713493827) SKIP: sanity test_300ua needs >= 2 MDTs SKIP 300ua (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300ub: test MDT overstriping interface & limits ========================================================== 22:30:31 (1713493831) SKIP: sanity test_300ub needs >= 2 MDTs SKIP 300ub (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300uc: test MDT overstriping as default & inheritance ========================================================== 22:30:34 (1713493834) SKIP: sanity test_300uc needs >= 2 MDTs SKIP 300uc (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300ud: dir split ========================== 22:30:38 (1713493838) SKIP: sanity test_300ud needs >= 2 MDTs SKIP 300ud (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300ue: dir merge 
========================== 22:30:42 (1713493842) SKIP: sanity test_300ue needs >= 2 MDTs SKIP 300ue (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300uf: migrate with too many local locks == 22:30:45 (1713493845) SKIP: sanity test_300uf needs >= 2 MDTs SKIP 300uf (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 300ug: migrate overstriped dirs =========== 22:30:49 (1713493849) SKIP: sanity test_300ug needs >= 2 MDTs SKIP 300ug (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 310a: open unlink remote file ============= 22:30:53 (1713493853) SKIP: sanity test_310a needs >= 4 MDTs SKIP 310a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 310b: unlink remote file with multiple links while open ========================================================== 22:30:56 (1713493856) SKIP: sanity test_310b needs >= 4 MDTs SKIP 310b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 310c: open-unlink remote file with multiple links ========================================================== 22:31:00 (1713493860) SKIP: sanity test_310c needs >= 4 MDTs SKIP 310c (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 311: disable OSP precreate, and unlink should destroy objs ========================================================== 22:31:04 (1713493864) total: 1000 open/close in 3.94 seconds: 253.65 ops/second - unlinked 0 (time 1713493872 ; total 0 ; last 0) total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second waited 5 sec, old Iused 12187, new Iused 11279 PASS 311 (17s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity test_312 skipping ALWAYS excluded test 312 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 313: io should fail after last_rcvd update fail ========================================================== 22:31:24 (1713493884) fail_loc=0x720 dd: failed to open '/mnt/lustre/f313.sanity': Input/output error fail_loc=0 PASS 313 (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 314: OSP shouldn't fail after last_rcvd update failure ========================================================== 22:31:30 (1713493890) fail_loc=0x720 sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete fail_loc=0 PASS 314 (22s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 315: read should be accounted ============= 22:31:54 (1713493914) PASS 315 (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 316: lfs migrate of file with large_xattr enabled ========================================================== 22:32:03 (1713493923) SKIP: sanity test_316 needs >= 2 MDTs SKIP 316 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 317: Verify blocks get correctly update after truncate ========================================================== 22:32:07 (1713493927) SKIP: sanity test_317 LU-10370: no implementation for ZFS SKIP 317 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 318: Verify async readahead tunables ====== 22:32:11 (1713493931) llite.lustre-ffff88012c001800.max_read_ahead_async_active=256 
llite.lustre-ffff88012c001800.max_read_ahead_async_active=0 llite.lustre-ffff88012c001800.max_read_ahead_async_active=512 llite.lustre-ffff88012c001800.max_read_ahead_async_active=2 error: set_param: setting /sys/fs/lustre/llite/lustre-ffff88012c001800/read_ahead_async_file_threshold_mb=65: Numerical result out of range error: set_param: setting 'llite/*/read_ahead_async_file_threshold_mb'='65': Numerical result out of range llite.lustre-ffff88012c001800.read_ahead_async_file_threshold_mb=64 llite.lustre-ffff88012c001800.read_ahead_async_file_threshold_mb=64 PASS 318 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 319: lost lease lock on migrate error ===== 22:32:16 (1713493936) SKIP: sanity test_319 needs >= 2 MDTs SKIP 319 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 350: force NID mismatch path to be exercised ========================================================== 22:32:20 (1713493940) fail_loc=0x1000e001 fail_val=100 /home/green/git/lustre-release/lustre/tests/sanity.sh: line 27406: 29928 Killed ls -lR $DIR/$tdir > /dev/null PASS 350 (107s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 360: ldiskfs unlink in a separate thread == 22:34:09 (1713494049) SKIP: sanity test_360 ldiskfs only test SKIP 360 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398a: direct IO should cancel lock otherwise lockless ========================================================== 22:34:13 (1713494053) ldlm.namespaces.MGC192.168.202.116@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0445414 s, 23.5 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.109577 s, 9.6 MB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0647769 s, 16.2 MB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.067321 s, 15.6 MB/s PASS 398a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398b: DIO and buffer IO race ============== 22:34:19 (1713494059) /usr/bin/fio 48+0 records in 48+0 records out 50331648 bytes (50 MB) copied, 1.44162 s, 34.9 MB/s mix direct rw 4096 by fio with 4 jobs... mix buffer rw 4096 by fio with 4 jobs... rand-rw: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=16 ... fio-3.7 Starting 4 processes rand-rw: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=16 ... 
fio-3.7 Starting 4 processes rand-rw: (groupid=0, jobs=1): err= 0: pid=31725: Thu Apr 18 22:34:34 2024 read: IOPS=119, BW=477KiB/s (489kB/s)(5992KiB/12557msec) clat (usec): min=45, max=184317, avg=3896.18, stdev=5660.53 lat (usec): min=45, max=184318, avg=3896.76, stdev=5660.55 clat percentiles (usec): | 1.00th=[ 51], 5.00th=[ 69], 10.00th=[ 74], 20.00th=[ 120], | 30.00th=[ 2278], 40.00th=[ 2671], 50.00th=[ 3097], 60.00th=[ 3785], | 70.00th=[ 4883], 80.00th=[ 6063], 90.00th=[ 7963], 95.00th=[ 9503], | 99.00th=[ 13829], 99.50th=[ 15664], 99.90th=[ 39060], 99.95th=[183501], | 99.99th=[183501] bw ( KiB/s): min= 200, max= 694, per=24.79%, avg=474.04, stdev=110.90, samples=25 iops : min= 50, max= 173, avg=118.40, stdev=27.72, samples=25 write: IOPS=125, BW=501KiB/s (513kB/s)(6296KiB/12557msec) clat (usec): min=464, max=39382, avg=4198.38, stdev=3946.39 lat (usec): min=464, max=39383, avg=4200.40, stdev=3946.20 clat percentiles (usec): | 1.00th=[ 494], 5.00th=[ 529], 10.00th=[ 553], 20.00th=[ 668], | 30.00th=[ 988], 40.00th=[ 2343], 50.00th=[ 3458], 60.00th=[ 4359], | 70.00th=[ 5407], 80.00th=[ 6849], 90.00th=[ 9110], 95.00th=[11600], | 99.00th=[17695], 99.50th=[21365], 99.90th=[31327], 99.95th=[39584], | 99.99th=[39584] bw ( KiB/s): min= 224, max= 728, per=24.84%, avg=497.36, stdev=124.42, samples=25 iops : min= 56, max= 182, avg=124.24, stdev=31.11, samples=25 lat (usec) : 50=0.36%, 100=8.85%, 250=0.85%, 500=0.91%, 750=11.95% lat (usec) : 1000=2.83% lat (msec) : 2=5.76%, 4=27.44%, 10=35.19%, 20=5.44%, 50=0.39% lat (msec) : 250=0.03% cpu : usr=0.25%, sys=24.47%, ctx=5996, majf=0, minf=34 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1498,1574,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31726: Thu Apr 18 22:34:34 2024 read: IOPS=118, BW=475KiB/s (487kB/s)(5940KiB/12496msec) clat (usec): min=46, max=32512, avg=3624.97, stdev=3115.38 lat (usec): min=47, max=32513, avg=3626.28, stdev=3115.63 clat percentiles (usec): | 1.00th=[ 53], 5.00th=[ 70], 10.00th=[ 74], 20.00th=[ 117], | 30.00th=[ 2278], 40.00th=[ 2704], 50.00th=[ 3097], 60.00th=[ 3654], | 70.00th=[ 4424], 80.00th=[ 5800], 90.00th=[ 7570], 95.00th=[ 8848], | 99.00th=[13042], 99.50th=[16450], 99.90th=[28705], 99.95th=[32637], | 99.99th=[32637] bw ( KiB/s): min= 240, max= 688, per=24.84%, avg=474.87, stdev=111.86, samples=24 iops : min= 60, max= 172, avg=118.67, stdev=27.93, samples=24 write: IOPS=127, BW=508KiB/s (520kB/s)(6348KiB/12496msec) clat (usec): min=462, max=181458, avg=4412.07, stdev=6082.95 lat (usec): min=463, max=181459, avg=4413.15, stdev=6083.04 clat percentiles (usec): | 1.00th=[ 498], 5.00th=[ 529], 10.00th=[ 562], 20.00th=[ 676], | 30.00th=[ 955], 40.00th=[ 2180], 50.00th=[ 3458], 60.00th=[ 4359], | 70.00th=[ 5604], 80.00th=[ 7308], 90.00th=[ 9372], 95.00th=[ 12256], | 99.00th=[ 17433], 99.50th=[ 21890], 99.90th=[ 42206], 99.95th=[181404], | 99.99th=[181404] bw ( KiB/s): min= 288, max= 742, per=24.95%, avg=499.54, stdev=106.19, samples=24 iops : min= 72, max= 185, avg=124.83, stdev=26.51, samples=24 lat (usec) : 50=0.23%, 100=8.92%, 250=1.24%, 500=0.75%, 750=11.59% lat (usec) : 1000=3.91% lat (msec) : 2=5.37%, 4=28.35%, 10=33.82%, 20=5.31%, 50=0.49% lat (msec) : 250=0.03% cpu : usr=0.21%, sys=24.60%, ctx=5974, majf=0, minf=34 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1485,1587,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31727: Thu Apr 18 22:34:34 2024 read: IOPS=120, BW=481KiB/s (492kB/s)(6028KiB/12536msec) clat (usec): min=46, max=179606, avg=3803.71, stdev=5642.67 lat (usec): min=47, max=179607, avg=3804.39, stdev=5642.65 clat percentiles (usec): | 1.00th=[ 50], 5.00th=[ 69], 10.00th=[ 73], 20.00th=[ 99], | 30.00th=[ 2212], 40.00th=[ 2606], 50.00th=[ 3064], 60.00th=[ 3720], | 70.00th=[ 4621], 80.00th=[ 5932], 90.00th=[ 7767], 95.00th=[ 9372], | 99.00th=[ 13435], 99.50th=[ 20841], 99.90th=[ 44303], 99.95th=[179307], | 99.99th=[179307] bw ( KiB/s): min= 191, max= 688, per=25.12%, avg=480.20, stdev=112.32, samples=25 iops : min= 47, max= 172, avg=120.00, stdev=28.15, samples=25 write: IOPS=124, BW=499KiB/s (511kB/s)(6260KiB/12536msec) clat (usec): min=459, max=44398, avg=4276.94, stdev=4181.49 lat (usec): min=460, max=44399, avg=4278.17, stdev=4181.68 clat percentiles (usec): | 1.00th=[ 494], 5.00th=[ 523], 10.00th=[ 545], 20.00th=[ 635], | 30.00th=[ 930], 40.00th=[ 2245], 50.00th=[ 3523], 60.00th=[ 4555], | 70.00th=[ 5604], 80.00th=[ 7177], 90.00th=[ 9503], 95.00th=[11469], | 99.00th=[17433], 99.50th=[19792], 99.90th=[38536], 99.95th=[44303], | 99.99th=[44303] bw ( KiB/s): min= 271, max= 728, per=24.88%, avg=498.12, stdev=112.85, samples=25 iops : min= 67, max= 182, avg=124.48, stdev=28.27, samples=25 lat (usec) : 50=0.72%, 100=9.18%, 250=1.46%, 500=0.78%, 750=12.14% lat (usec) : 1000=2.86% lat (msec) : 2=5.50%, 4=26.86%, 10=34.24%, 20=5.76%, 50=0.46% lat (msec) : 250=0.03% cpu : usr=0.18%, sys=24.76%, ctx=5991, majf=0, minf=34 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1507,1565,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31728: Thu Apr 18 22:34:34 2024 read: IOPS=121, BW=486KiB/s (498kB/s)(6052KiB/12446msec) clat (usec): min=48, max=190012, avg=3780.97, stdev=5716.20 lat (usec): min=48, max=190013, avg=3781.80, stdev=5716.26 clat percentiles (usec): | 1.00th=[ 52], 5.00th=[ 69], 10.00th=[ 73], 20.00th=[ 100], | 30.00th=[ 2245], 40.00th=[ 2638], 50.00th=[ 3064], 60.00th=[ 3720], | 70.00th=[ 4686], 80.00th=[ 5932], 90.00th=[ 7767], 95.00th=[ 9241], | 99.00th=[ 12649], 99.50th=[ 16188], 99.90th=[ 29492], 99.95th=[189793], | 99.99th=[189793] bw ( KiB/s): min= 224, max= 696, per=25.01%, avg=478.25, stdev=96.48, samples=24 iops : min= 56, max= 174, avg=119.54, stdev=24.11, samples=24 write: IOPS=125, BW=501KiB/s (513kB/s)(6236KiB/12446msec) clat (usec): min=435, max=31753, avg=4244.04, stdev=4162.69 lat (usec): min=436, max=31753, avg=4244.73, stdev=4162.70 clat percentiles (usec): | 1.00th=[ 494], 5.00th=[ 523], 10.00th=[ 545], 20.00th=[ 627], | 30.00th=[ 840], 40.00th=[ 1795], 50.00th=[ 3326], 60.00th=[ 4359], | 70.00th=[ 5473], 80.00th=[ 7177], 90.00th=[ 9896], 95.00th=[12387], | 99.00th=[18482], 99.50th=[20317], 99.90th=[27657], 99.95th=[31851], | 99.99th=[31851] bw ( KiB/s): min= 272, max= 792, per=24.72%, avg=494.92, stdev=107.44, samples=24 iops : min= 68, 
max= 198, avg=123.71, stdev=26.85, samples=24 lat (usec) : 50=0.33%, 100=9.60%, 250=1.11%, 500=0.81%, 750=13.09% lat (usec) : 1000=2.83% lat (msec) : 2=5.73%, 4=26.60%, 10=33.53%, 20=5.96%, 50=0.39% lat (msec) : 250=0.03% cpu : usr=0.17%, sys=24.61%, ctx=5980, majf=0, minf=32 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1513,1559,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 Run status group 0 (all jobs): READ: bw=1912KiB/s (1958kB/s), 475KiB/s-486KiB/s (487kB/s-498kB/s), io=23.4MiB (24.6MB), run=12446-12557msec WRITE: bw=2002KiB/s (2050kB/s), 499KiB/s-508KiB/s (511kB/s-520kB/s), io=24.6MiB (25.7MB), run=12446-12557msec rand-rw: (groupid=0, jobs=1): err= 0: pid=31721: Thu Apr 18 22:35:02 2024 read: IOPS=37, BW=151KiB/s (154kB/s)(5992KiB/39803msec) clat (usec): min=1631, max=187239, avg=3603.68, stdev=5957.30 lat (usec): min=1631, max=187241, avg=3604.29, stdev=5957.33 clat percentiles (usec): | 1.00th=[ 1762], 5.00th=[ 1844], 10.00th=[ 1909], 20.00th=[ 1991], | 30.00th=[ 2089], 40.00th=[ 2180], 50.00th=[ 2311], 60.00th=[ 2474], | 70.00th=[ 2704], 80.00th=[ 3458], 90.00th=[ 7111], 95.00th=[ 9634], | 99.00th=[ 18220], 99.50th=[ 21890], 99.90th=[ 60556], 99.95th=[187696], | 99.99th=[187696] bw ( KiB/s): min= 40, max= 272, per=25.05%, avg=150.30, stdev=59.22, samples=79 iops : min= 10, max= 68, avg=37.52, stdev=14.83, samples=79 write: IOPS=39, BW=158KiB/s (162kB/s)(6296KiB/39803msec) clat (usec): min=11955, max=56188, avg=21787.08, stdev=7457.39 lat (usec): min=11955, max=56189, avg=21787.87, stdev=7457.36 clat percentiles (usec): | 1.00th=[12780], 5.00th=[13304], 10.00th=[13960], 20.00th=[15270], | 30.00th=[17171], 40.00th=[19268], 50.00th=[20841], 60.00th=[22152], | 70.00th=[23462], 80.00th=[25297], 90.00th=[31589], 95.00th=[37487], | 99.00th=[47449], 99.50th=[51119], 99.90th=[55313], 99.95th=[56361], | 99.99th=[56361] bw ( KiB/s): min= 48, max= 208, per=25.14%, avg=157.90, stdev=42.86, samples=79 iops : min= 12, max= 52, avg=39.42, stdev=10.71, samples=79 lat (msec) : 2=9.83%, 4=30.63%, 10=6.02%, 20=24.97%, 50=28.22% lat (msec) : 100=0.29%, 250=0.03% cpu : usr=0.09%, sys=4.41%, ctx=3310, majf=0, minf=34 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1498,1574,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31722: Thu Apr 18 22:35:02 2024 read: IOPS=37, BW=148KiB/s (152kB/s)(5940KiB/40020msec) clat (usec): min=1647, max=181799, avg=3463.64, stdev=5558.74 lat (usec): min=1647, max=181801, avg=3464.28, stdev=5558.75 clat percentiles (usec): | 1.00th=[ 1745], 5.00th=[ 1844], 10.00th=[ 1909], 20.00th=[ 1991], | 30.00th=[ 2073], 40.00th=[ 2180], 50.00th=[ 2311], 60.00th=[ 2474], | 70.00th=[ 2704], 80.00th=[ 3294], 90.00th=[ 6521], 95.00th=[ 9372], | 99.00th=[ 16188], 99.50th=[ 23200], 99.90th=[ 38536], 99.95th=[181404], | 99.99th=[181404] bw ( KiB/s): min= 15, max= 280, per=24.71%, avg=148.28, stdev=61.60, samples=80 iops : min= 3, max= 70, avg=36.98, stdev=15.45, samples=80 write: IOPS=39, BW=159KiB/s (162kB/s)(6348KiB/40020msec) clat (usec): min=11877, max=67406, avg=21905.64, 
stdev=7482.10 lat (usec): min=11878, max=67407, avg=21906.41, stdev=7482.14 clat percentiles (usec): | 1.00th=[12649], 5.00th=[13435], 10.00th=[14091], 20.00th=[15401], | 30.00th=[17433], 40.00th=[19530], 50.00th=[21103], 60.00th=[22414], | 70.00th=[23725], 80.00th=[25297], 90.00th=[31851], 95.00th=[36963], | 99.00th=[49021], 99.50th=[51643], 99.90th=[57410], 99.95th=[67634], | 99.99th=[67634] bw ( KiB/s): min= 55, max= 200, per=25.22%, avg=158.37, stdev=42.06, samples=80 iops : min= 13, max= 50, avg=39.50, stdev=10.60, samples=80 lat (msec) : 2=10.16%, 4=30.53%, 10=5.73%, 20=23.37%, 50=29.79% lat (msec) : 100=0.39%, 250=0.03% cpu : usr=0.08%, sys=4.42%, ctx=3330, majf=0, minf=32 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1485,1587,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31723: Thu Apr 18 22:35:02 2024 read: IOPS=37, BW=151KiB/s (155kB/s)(6028KiB/39883msec) clat (usec): min=1640, max=60511, avg=3585.21, stdev=3754.08 lat (usec): min=1641, max=60512, avg=3585.88, stdev=3754.10 clat percentiles (usec): | 1.00th=[ 1762], 5.00th=[ 1860], 10.00th=[ 1909], 20.00th=[ 2008], | 30.00th=[ 2089], 40.00th=[ 2212], 50.00th=[ 2311], 60.00th=[ 2474], | 70.00th=[ 2737], 80.00th=[ 3490], 90.00th=[ 7439], 95.00th=[10290], | 99.00th=[20579], 99.50th=[25035], 99.90th=[36963], 99.95th=[60556], | 99.99th=[60556] bw ( KiB/s): min= 48, max= 280, per=25.20%, avg=151.23, stdev=55.80, samples=79 iops : min= 12, max= 70, avg=37.76, stdev=13.94, samples=79 write: IOPS=39, BW=157KiB/s (161kB/s)(6260KiB/39883msec) clat (msec): min=11, max=201, avg=21.96, stdev= 8.69 lat (msec): min=11, max=201, avg=21.96, stdev= 8.69 clat percentiles (msec): | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 16], | 30.00th=[ 18], 40.00th=[ 20], 50.00th=[ 22], 60.00th=[ 23], | 70.00th=[ 24], 80.00th=[ 26], 90.00th=[ 33], 95.00th=[ 39], | 99.00th=[ 48], 99.50th=[ 52], 99.90th=[ 72], 99.95th=[ 203], | 99.99th=[ 203] bw ( KiB/s): min= 56, max= 208, per=24.97%, avg=156.80, stdev=43.43, samples=79 iops : min= 14, max= 52, avg=39.15, stdev=10.86, samples=79 lat (msec) : 2=9.60%, 4=30.47%, 10=6.22%, 20=24.35%, 50=29.07% lat (msec) : 100=0.26%, 250=0.03% cpu : usr=0.09%, sys=4.50%, ctx=3315, majf=0, minf=34 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1507,1565,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31724: Thu Apr 18 22:35:02 2024 read: IOPS=37, BW=151KiB/s (155kB/s)(6052KiB/39964msec) clat (usec): min=1588, max=46957, avg=3466.46, stdev=3603.05 lat (usec): min=1589, max=46958, avg=3467.14, stdev=3603.07 clat percentiles (usec): | 1.00th=[ 1762], 5.00th=[ 1844], 10.00th=[ 1893], 20.00th=[ 1991], | 30.00th=[ 2089], 40.00th=[ 2180], 50.00th=[ 2278], 60.00th=[ 2442], | 70.00th=[ 2737], 80.00th=[ 3392], 90.00th=[ 6849], 95.00th=[ 9634], | 99.00th=[17433], 99.50th=[23200], 99.90th=[45876], 99.95th=[46924], | 99.99th=[46924] bw ( KiB/s): min= 32, max= 304, per=25.10%, avg=150.61, stdev=59.90, samples=79 iops : min= 8, max= 76, avg=37.59, stdev=15.00, samples=79 write: IOPS=39, 
BW=156KiB/s (160kB/s)(6236KiB/39964msec) clat (msec): min=11, max=199, avg=22.20, stdev= 8.69 lat (msec): min=11, max=199, avg=22.20, stdev= 8.69 clat percentiles (msec): | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 16], | 30.00th=[ 18], 40.00th=[ 20], 50.00th=[ 22], 60.00th=[ 23], | 70.00th=[ 24], 80.00th=[ 26], 90.00th=[ 33], 95.00th=[ 39], | 99.00th=[ 47], 99.50th=[ 54], 99.90th=[ 63], 99.95th=[ 199], | 99.99th=[ 199] bw ( KiB/s): min= 56, max= 208, per=24.80%, avg=155.77, stdev=42.32, samples=79 iops : min= 14, max= 52, avg=38.89, stdev=10.59, samples=79 lat (msec) : 2=10.51%, 4=30.73%, 10=5.66%, 20=23.40%, 50=29.36% lat (msec) : 100=0.29%, 250=0.03% cpu : usr=0.12%, sys=4.36%, ctx=3346, majf=0, minf=31 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1513,1559,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 Run status group 0 (all jobs): READ: bw=600KiB/s (614kB/s), 148KiB/s-151KiB/s (152kB/s-155kB/s), io=23.4MiB (24.6MB), run=39803-40020msec WRITE: bw=628KiB/s (643kB/s), 156KiB/s-159KiB/s (160kB/s-162kB/s), io=24.6MiB (25.7MB), run=39803-40020msec mix direct rw 16384 by fio with 4 jobs... mix buffer rw 16384 by fio with 4 jobs... rand-rw: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=psync, iodepth=16 ... fio-3.7 Starting 4 processes rand-rw: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=psync, iodepth=16 ... fio-3.7 Starting 4 processes rand-rw: (groupid=0, jobs=1): err= 0: pid=31771: Thu Apr 18 22:35:07 2024 read: IOPS=87, BW=1404KiB/s (1438kB/s)(5920KiB/4217msec) clat (usec): min=51, max=29636, avg=8216.03, stdev=5241.57 lat (usec): min=52, max=29637, avg=8218.84, stdev=5241.62 clat percentiles (usec): | 1.00th=[ 55], 5.00th=[ 64], 10.00th=[ 74], 20.00th=[ 100], | 30.00th=[ 7701], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10290], | 70.00th=[10814], 80.00th=[11731], 90.00th=[12911], 95.00th=[14353], | 99.00th=[23725], 99.50th=[28181], 99.90th=[29754], 99.95th=[29754], | 99.99th=[29754] bw ( KiB/s): min= 1120, max= 1600, per=23.86%, avg=1335.62, stdev=156.78, samples=8 iops : min= 70, max= 100, avg=83.38, stdev= 9.87, samples=8 write: IOPS=94, BW=1510KiB/s (1546kB/s)(6368KiB/4217msec) clat (usec): min=669, max=19918, avg=2940.87, stdev=2967.57 lat (usec): min=670, max=19919, avg=2941.71, stdev=2967.58 clat percentiles (usec): | 1.00th=[ 685], 5.00th=[ 734], 10.00th=[ 766], 20.00th=[ 799], | 30.00th=[ 840], 40.00th=[ 938], 50.00th=[ 2180], 60.00th=[ 2802], | 70.00th=[ 3392], 80.00th=[ 4948], 90.00th=[ 6456], 95.00th=[ 8848], | 99.00th=[15664], 99.50th=[17957], 99.90th=[19792], 99.95th=[19792], | 99.99th=[19792] bw ( KiB/s): min= 1056, max= 1756, per=24.85%, avg=1463.50, stdev=231.14, samples=8 iops : min= 66, max= 109, avg=91.38, stdev=14.31, samples=8 lat (usec) : 100=9.64%, 250=1.04%, 500=0.39%, 750=3.91%, 1000=18.23% lat (msec) : 2=3.78%, 4=13.54%, 10=26.56%, 20=22.27%, 50=0.65% cpu : usr=0.12%, sys=22.22%, ctx=1960, majf=0, minf=34 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=370,398,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, 
percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31772: Thu Apr 18 22:35:07 2024 read: IOPS=85, BW=1369KiB/s (1402kB/s)(5744KiB/4195msec) clat (usec): min=51, max=19198, avg=8331.57, stdev=4963.03 lat (usec): min=51, max=19199, avg=8332.15, stdev=4963.06 clat percentiles (usec): | 1.00th=[ 54], 5.00th=[ 72], 10.00th=[ 75], 20.00th=[ 130], | 30.00th=[ 7635], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[10552], | 70.00th=[11207], 80.00th=[11994], 90.00th=[13698], 95.00th=[14877], | 99.00th=[17171], 99.50th=[19268], 99.90th=[19268], 99.95th=[19268], | 99.99th=[19268] bw ( KiB/s): min= 992, max= 1792, per=24.29%, avg=1359.62, stdev=260.51, samples=8 iops : min= 62, max= 112, avg=84.88, stdev=16.28, samples=8 write: IOPS=97, BW=1560KiB/s (1597kB/s)(6544KiB/4195msec) clat (usec): min=647, max=54239, avg=2930.16, stdev=4047.37 lat (usec): min=648, max=54240, avg=2931.04, stdev=4047.45 clat percentiles (usec): | 1.00th=[ 676], 5.00th=[ 709], 10.00th=[ 742], 20.00th=[ 799], | 30.00th=[ 840], 40.00th=[ 906], 50.00th=[ 1352], 60.00th=[ 2507], | 70.00th=[ 3195], 80.00th=[ 4113], 90.00th=[ 6390], 95.00th=[ 9241], | 99.00th=[16581], 99.50th=[21365], 99.90th=[54264], 99.95th=[54264], | 99.99th=[54264] bw ( KiB/s): min= 1152, max= 2304, per=25.94%, avg=1527.62, stdev=376.66, samples=8 iops : min= 72, max= 144, avg=95.38, stdev=23.56, samples=8 lat (usec) : 100=8.59%, 250=1.17%, 500=0.13%, 750=5.86%, 1000=18.62% lat (msec) : 2=4.95%, 4=13.54%, 10=23.05%, 20=23.70%, 50=0.26% lat (msec) : 100=0.13% cpu : usr=0.17%, sys=21.89%, ctx=1909, majf=0, minf=35 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=359,409,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31773: Thu Apr 18 22:35:07 2024 read: IOPS=91, BW=1462KiB/s (1497kB/s)(6256KiB/4278msec) clat (usec): min=50, max=39151, avg=8221.09, stdev=5422.80 lat (usec): min=50, max=39152, avg=8221.63, stdev=5422.80 clat percentiles (usec): | 1.00th=[ 53], 5.00th=[ 67], 10.00th=[ 74], 20.00th=[ 91], | 30.00th=[ 7570], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[10421], | 70.00th=[11076], 80.00th=[11863], 90.00th=[13173], 95.00th=[15401], | 99.00th=[20841], 99.50th=[24773], 99.90th=[39060], 99.95th=[39060], | 99.99th=[39060] bw ( KiB/s): min= 1088, max= 1984, per=25.43%, avg=1423.62, stdev=317.69, samples=8 iops : min= 68, max= 124, avg=88.87, stdev=19.85, samples=8 write: IOPS=88, BW=1410KiB/s (1444kB/s)(6032KiB/4278msec) clat (usec): min=654, max=47528, avg=2804.73, stdev=3676.79 lat (usec): min=655, max=47528, avg=2805.73, stdev=3676.79 clat percentiles (usec): | 1.00th=[ 685], 5.00th=[ 709], 10.00th=[ 734], 20.00th=[ 791], | 30.00th=[ 824], 40.00th=[ 881], 50.00th=[ 1287], 60.00th=[ 2507], | 70.00th=[ 3294], 80.00th=[ 4359], 90.00th=[ 5932], 95.00th=[ 7570], | 99.00th=[17695], 99.50th=[18744], 99.90th=[47449], 99.95th=[47449], | 99.99th=[47449] bw ( KiB/s): min= 960, max= 1952, per=23.02%, avg=1355.62, stdev=329.21, samples=8 iops : min= 60, max= 122, avg=84.62, stdev=20.64, samples=8 lat (usec) : 100=10.81%, 250=1.43%, 500=0.39%, 750=6.38%, 1000=16.54% lat (msec) : 2=4.17%, 4=10.68%, 10=25.39%, 20=23.57%, 50=0.65% cpu : usr=0.12%, sys=21.74%, ctx=1901, majf=0, minf=35 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=391,377,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31774: Thu Apr 18 22:35:07 2024 read: IOPS=88, BW=1416KiB/s (1450kB/s)(6032KiB/4259msec) clat (usec): min=48, max=32919, avg=8427.41, stdev=5304.13 lat (usec): min=48, max=32919, avg=8428.06, stdev=5304.23 clat percentiles (usec): | 1.00th=[ 52], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 88], | 30.00th=[ 7701], 40.00th=[ 8717], 50.00th=[ 9765], 60.00th=[10421], | 70.00th=[10945], 80.00th=[11994], 90.00th=[13566], 95.00th=[15401], | 99.00th=[21627], 99.50th=[27657], 99.90th=[32900], 99.95th=[32900], | 99.99th=[32900] bw ( KiB/s): min= 1088, max= 1568, per=24.43%, avg=1367.63, stdev=163.18, samples=8 iops : min= 68, max= 98, avg=85.38, stdev=10.27, samples=8 write: IOPS=91, BW=1469KiB/s (1504kB/s)(6256KiB/4259msec) clat (usec): min=667, max=49070, avg=2752.62, stdev=3730.14 lat (usec): min=668, max=49071, avg=2753.93, stdev=3729.91 clat percentiles (usec): | 1.00th=[ 676], 5.00th=[ 725], 10.00th=[ 750], 20.00th=[ 791], | 30.00th=[ 832], 40.00th=[ 898], 50.00th=[ 1467], 60.00th=[ 2507], | 70.00th=[ 2933], 80.00th=[ 4228], 90.00th=[ 5800], 95.00th=[ 6980], | 99.00th=[18220], 99.50th=[30802], 99.90th=[49021], 99.95th=[49021], | 99.99th=[49021] bw ( KiB/s): min= 1088, max= 2016, per=24.44%, avg=1439.62, stdev=290.33, samples=8 iops : min= 68, max= 126, avg=89.88, stdev=18.16, samples=8 lat (usec) : 50=0.26%, 100=10.29%, 250=0.26%, 500=0.13%, 750=5.08% lat (usec) : 1000=17.84% lat (msec) : 2=3.65%, 4=14.19%, 10=24.74%, 20=22.79%, 50=0.78% cpu : usr=0.16%, sys=21.58%, ctx=1924, majf=0, minf=34 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=377,391,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 Run status group 0 (all jobs): READ: bw=5599KiB/s (5733kB/s), 1369KiB/s-1462KiB/s (1402kB/s-1497kB/s), io=23.4MiB (24.5MB), run=4195-4278msec WRITE: bw=5891KiB/s (6032kB/s), 1410KiB/s-1560KiB/s (1444kB/s-1597kB/s), io=24.6MiB (25.8MB), run=4195-4278msec rand-rw: (groupid=0, jobs=1): err= 0: pid=31775: Thu Apr 18 22:35:13 2024 read: IOPS=37, BW=598KiB/s (612kB/s)(5920KiB/9907msec) clat (usec): min=1827, max=35433, avg=3824.42, stdev=3533.50 lat (usec): min=1827, max=35433, avg=3825.01, stdev=3533.51 clat percentiles (usec): | 1.00th=[ 1909], 5.00th=[ 2057], 10.00th=[ 2114], 20.00th=[ 2180], | 30.00th=[ 2278], 40.00th=[ 2343], 50.00th=[ 2507], 60.00th=[ 2769], | 70.00th=[ 3097], 80.00th=[ 4228], 90.00th=[ 7046], 95.00th=[10683], | 99.00th=[18482], 99.50th=[25297], 99.90th=[35390], 99.95th=[35390], | 99.99th=[35390] bw ( KiB/s): min= 288, max= 864, per=24.60%, avg=587.47, stdev=155.96, samples=19 iops : min= 18, max= 54, avg=36.53, stdev= 9.91, samples=19 write: IOPS=40, BW=643KiB/s (658kB/s)(6368KiB/9907msec) clat (usec): min=11588, max=48846, avg=21319.79, stdev=6398.22 lat (usec): min=11589, max=48847, avg=21320.80, stdev=6398.25 clat percentiles (usec): | 1.00th=[12387], 5.00th=[13173], 10.00th=[13829], 20.00th=[15533], | 30.00th=[17171], 40.00th=[19006], 50.00th=[20841], 60.00th=[22152], | 70.00th=[23462], 80.00th=[25560], 90.00th=[29754], 95.00th=[33162], | 99.00th=[42730], 99.50th=[49021], 
99.90th=[49021], 99.95th=[49021], | 99.99th=[49021] bw ( KiB/s): min= 384, max= 800, per=25.27%, avg=634.68, stdev=152.89, samples=19 iops : min= 24, max= 50, avg=39.47, stdev= 9.68, samples=19 lat (msec) : 2=1.69%, 4=36.46%, 10=7.03%, 20=26.04%, 50=28.78% cpu : usr=0.08%, sys=5.08%, ctx=854, majf=0, minf=31 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=370,398,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31776: Thu Apr 18 22:35:13 2024 read: IOPS=35, BW=573KiB/s (586kB/s)(5744KiB/10029msec) clat (usec): min=1823, max=18528, avg=3613.04, stdev=2939.19 lat (usec): min=1824, max=18528, avg=3613.69, stdev=2939.17 clat percentiles (usec): | 1.00th=[ 1958], 5.00th=[ 2024], 10.00th=[ 2089], 20.00th=[ 2180], | 30.00th=[ 2245], 40.00th=[ 2311], 50.00th=[ 2376], 60.00th=[ 2606], | 70.00th=[ 2933], 80.00th=[ 4047], 90.00th=[ 7177], 95.00th=[10814], | 99.00th=[16712], 99.50th=[17695], 99.90th=[18482], 99.95th=[18482], | 99.99th=[18482] bw ( KiB/s): min= 192, max= 960, per=24.05%, avg=574.40, stdev=216.66, samples=20 iops : min= 12, max= 60, avg=35.90, stdev=13.54, samples=20 write: IOPS=40, BW=653KiB/s (668kB/s)(6544KiB/10029msec) clat (usec): min=11578, max=85499, avg=21328.84, stdev=7754.34 lat (usec): min=11579, max=85500, avg=21329.87, stdev=7754.36 clat percentiles (usec): | 1.00th=[12256], 5.00th=[13042], 10.00th=[13566], 20.00th=[14877], | 30.00th=[16909], 40.00th=[18220], 50.00th=[20055], 60.00th=[21890], | 70.00th=[23200], 80.00th=[26084], 90.00th=[31065], 95.00th=[33817], | 99.00th=[46924], 99.50th=[48497], 99.90th=[85459], 99.95th=[85459], | 99.99th=[85459] bw ( KiB/s): min= 416, max= 896, per=25.92%, avg=651.20, stdev=161.93, samples=20 iops : min= 26, max= 56, avg=40.70, stdev=10.12, samples=20 lat (msec) : 2=1.04%, 4=36.33%, 10=6.51%, 20=29.30%, 50=26.56% lat (msec) : 100=0.26% cpu : usr=0.07%, sys=5.08%, ctx=861, majf=0, minf=32 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=359,409,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31777: Thu Apr 18 22:35:13 2024 read: IOPS=40, BW=651KiB/s (667kB/s)(6256KiB/9609msec) clat (usec): min=1957, max=25335, avg=3606.68, stdev=2940.95 lat (usec): min=1958, max=25335, avg=3607.26, stdev=2940.95 clat percentiles (usec): | 1.00th=[ 1958], 5.00th=[ 2040], 10.00th=[ 2089], 20.00th=[ 2180], | 30.00th=[ 2245], 40.00th=[ 2376], 50.00th=[ 2507], 60.00th=[ 2671], | 70.00th=[ 2966], 80.00th=[ 4146], 90.00th=[ 6521], 95.00th=[10552], | 99.00th=[15008], 99.50th=[22676], 99.90th=[25297], 99.95th=[25297], | 99.99th=[25297] bw ( KiB/s): min= 288, max= 1312, per=27.50%, avg=656.79, stdev=283.35, samples=19 iops : min= 18, max= 82, avg=41.00, stdev=17.77, samples=19 write: IOPS=39, BW=628KiB/s (643kB/s)(6032KiB/9609msec) clat (usec): min=11942, max=51687, avg=21727.08, stdev=6449.04 lat (usec): min=11942, max=51688, avg=21728.04, stdev=6449.10 clat percentiles (usec): | 1.00th=[12387], 5.00th=[13435], 10.00th=[13960], 20.00th=[15664], | 30.00th=[17695], 40.00th=[20055], 50.00th=[21365], 60.00th=[22414], | 
70.00th=[23462], 80.00th=[25822], 90.00th=[30540], 95.00th=[34341], | 99.00th=[43254], 99.50th=[43779], 99.90th=[51643], 99.95th=[51643], | 99.99th=[51643] bw ( KiB/s): min= 448, max= 800, per=24.81%, avg=623.11, stdev=136.81, samples=19 iops : min= 28, max= 50, avg=38.89, stdev= 8.61, samples=19 lat (msec) : 2=1.43%, 4=38.93%, 10=7.68%, 20=21.61%, 50=30.21% lat (msec) : 100=0.13% cpu : usr=0.06%, sys=5.19%, ctx=844, majf=0, minf=33 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=391,377,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31778: Thu Apr 18 22:35:13 2024 read: IOPS=38, BW=610KiB/s (624kB/s)(6032KiB/9893msec) clat (usec): min=1950, max=21775, avg=3628.28, stdev=3003.34 lat (usec): min=1950, max=21775, avg=3628.86, stdev=3003.34 clat percentiles (usec): | 1.00th=[ 1975], 5.00th=[ 2040], 10.00th=[ 2114], 20.00th=[ 2180], | 30.00th=[ 2245], 40.00th=[ 2343], 50.00th=[ 2442], 60.00th=[ 2638], | 70.00th=[ 2933], 80.00th=[ 4015], 90.00th=[ 6718], 95.00th=[11076], | 99.00th=[16712], 99.50th=[18482], 99.90th=[21890], 99.95th=[21890], | 99.99th=[21890] bw ( KiB/s): min= 320, max= 896, per=25.17%, avg=601.05, stdev=187.84, samples=19 iops : min= 20, max= 56, avg=37.42, stdev=11.87, samples=19 write: IOPS=39, BW=632KiB/s (648kB/s)(6256KiB/9893msec) clat (usec): min=11904, max=65599, avg=21787.59, stdev=6895.52 lat (usec): min=11905, max=65600, avg=21788.60, stdev=6895.56 clat percentiles (usec): | 1.00th=[12256], 5.00th=[13304], 10.00th=[13960], 20.00th=[15401], | 30.00th=[17433], 40.00th=[19792], 50.00th=[21103], 60.00th=[22414], | 70.00th=[23725], 80.00th=[26346], 90.00th=[31589], 95.00th=[34866], | 99.00th=[41681], 99.50th=[45876], 99.90th=[65799], 99.95th=[65799], | 99.99th=[65799] bw ( KiB/s): min= 384, max= 832, per=25.00%, avg=627.89, stdev=154.38, samples=19 iops : min= 24, max= 52, avg=39.11, stdev= 9.69, samples=19 lat (msec) : 2=1.04%, 4=38.15%, 10=7.03%, 20=24.48%, 50=29.17% lat (msec) : 100=0.13% cpu : usr=0.06%, sys=5.12%, ctx=861, majf=0, minf=33 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=377,391,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 Run status group 0 (all jobs): READ: bw=2388KiB/s (2446kB/s), 573KiB/s-651KiB/s (586kB/s-667kB/s), io=23.4MiB (24.5MB), run=9609-10029msec WRITE: bw=2513KiB/s (2573kB/s), 628KiB/s-653KiB/s (643kB/s-668kB/s), io=24.6MiB (25.8MB), run=9609-10029msec mix direct rw 1048576 by fio with 4 jobs... mix buffer rw 1048576 by fio with 4 jobs... rand-rw: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=16 ... fio-3.7 Starting 4 processes rand-rw: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=16 ... 
fio-3.7 Starting 4 processes rand-rw: (groupid=0, jobs=1): err= 0: pid=31792: Thu Apr 18 22:35:17 2024 read: IOPS=1, BW=1856KiB/s (1900kB/s)(7168KiB/3863msec) clat (usec): min=344, max=737078, avg=430538.54, stdev=288359.81 lat (usec): min=344, max=737080, avg=430539.45, stdev=288360.04 clat percentiles (usec): | 1.00th=[ 347], 5.00th=[ 347], 10.00th=[ 347], 20.00th=[ 44827], | 30.00th=[517997], 40.00th=[517997], 50.00th=[534774], 60.00th=[557843], | 70.00th=[557843], 80.00th=[624952], 90.00th=[734004], 95.00th=[734004], | 99.00th=[734004], 99.50th=[734004], 99.90th=[734004], 99.95th=[734004], | 99.99th=[734004] bw ( KiB/s): min= 2048, max= 2048, per=38.63%, avg=2048.00, stdev= 0.00, samples=6 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=6 write: IOPS=1, BW=1325KiB/s (1357kB/s)(5120KiB/3863msec) clat (msec): min=23, max=642, avg=169.59, stdev=267.88 lat (msec): min=23, max=642, avg=169.64, stdev=267.87 clat percentiles (msec): | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 24], | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 31], 60.00th=[ 31], | 70.00th=[ 128], 80.00th=[ 128], 90.00th=[ 642], 95.00th=[ 642], | 99.00th=[ 642], 99.50th=[ 642], 99.90th=[ 642], 99.95th=[ 642], | 99.99th=[ 642] bw ( KiB/s): min= 2048, max= 6144, per=45.99%, avg=3413.33, stdev=2364.83, samples=3 iops : min= 2, max= 6, avg= 3.33, stdev= 2.31, samples=3 lat (usec) : 500=8.33% lat (msec) : 50=33.33%, 250=8.33%, 750=50.00% cpu : usr=0.00%, sys=15.74%, ctx=1430, majf=0, minf=35 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=7,5,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31793: Thu Apr 18 22:35:17 2024 read: IOPS=0, BW=635KiB/s (650kB/s)(2048KiB/3226msec) clat (msec): min=50, max=494, avg=272.49, stdev=314.32 lat (msec): min=50, max=494, avg=272.49, stdev=314.31 clat percentiles (msec): | 1.00th=[ 51], 5.00th=[ 51], 10.00th=[ 51], 20.00th=[ 51], | 30.00th=[ 51], 40.00th=[ 51], 50.00th=[ 51], 60.00th=[ 493], | 70.00th=[ 493], 80.00th=[ 493], 90.00th=[ 493], 95.00th=[ 493], | 99.00th=[ 493], 99.50th=[ 493], 99.90th=[ 493], 99.95th=[ 493], | 99.99th=[ 493] bw ( KiB/s): min= 2048, max= 2048, per=38.63%, avg=2048.00, stdev= 0.00, samples=2 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=2 write: IOPS=3, BW=3174KiB/s (3250kB/s)(10.0MiB/3226msec) clat (msec): min=19, max=1412, avg=267.98, stdev=515.39 lat (msec): min=19, max=1412, avg=268.02, stdev=515.38 clat percentiles (msec): | 1.00th=[ 20], 5.00th=[ 20], 10.00th=[ 20], 20.00th=[ 21], | 30.00th=[ 22], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 28], | 70.00th=[ 51], 80.00th=[ 51], 90.00th=[ 1053], 95.00th=[ 1418], | 99.00th=[ 1418], 99.50th=[ 1418], 99.90th=[ 1418], 99.95th=[ 1418], | 99.99th=[ 1418] bw ( KiB/s): min= 4087, max= 6144, per=64.34%, avg=4775.67, stdev=1185.02, samples=3 iops : min= 3, max= 6, avg= 4.33, stdev= 1.53, samples=3 lat (msec) : 20=8.33%, 50=50.00%, 100=16.67%, 500=8.33% cpu : usr=0.00%, sys=8.93%, ctx=350, majf=0, minf=33 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=2,10,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 
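For reference, job reports like the rand-rw entries above and below are fio's own per-job summaries; a run of this shape can be reproduced with an invocation roughly like the following. This is a sketch only -- the exact options, file name, and size are generated inside sanity.sh's test_398b and are assumptions here:

    # 4 psync jobs doing a 50/50 random read/write mix at the block size under test;
    # --direct=1 gives the "mix direct rw" passes, dropping it gives "mix buffer rw"
    fio --name=rand-rw --rw=randrw --bs=1M --size=12M --numjobs=4 \
        --ioengine=psync --iodepth=16 --direct=1 \
        --filename=/mnt/lustre/f398b.sanity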
rand-rw: (groupid=0, jobs=1): err= 0: pid=31794: Thu Apr 18 22:35:17 2024 read: IOPS=1, BW=1601KiB/s (1639kB/s)(6144KiB/3838msec) clat (usec): min=317, max=657059, avg=298209.35, stdev=311838.39 lat (usec): min=317, max=657060, avg=298210.07, stdev=311838.41 clat percentiles (usec): | 1.00th=[ 318], 5.00th=[ 318], 10.00th=[ 318], 20.00th=[ 545], | 30.00th=[ 545], 40.00th=[ 49546], 50.00th=[ 49546], 60.00th=[526386], | 70.00th=[557843], 80.00th=[557843], 90.00th=[658506], 95.00th=[658506], | 99.00th=[658506], 99.50th=[658506], 99.90th=[658506], 99.95th=[658506], | 99.99th=[658506] bw ( KiB/s): min= 2048, max= 4096, per=64.39%, avg=3413.33, stdev=1182.41, samples=3 iops : min= 2, max= 4, avg= 3.33, stdev= 1.15, samples=3 write: IOPS=1, BW=1601KiB/s (1639kB/s)(6144KiB/3838msec) clat (msec): min=20, max=1389, avg=341.30, stdev=541.95 lat (msec): min=20, max=1389, avg=341.33, stdev=541.94 clat percentiles (msec): | 1.00th=[ 21], 5.00th=[ 21], 10.00th=[ 21], 20.00th=[ 25], | 30.00th=[ 25], 40.00th=[ 61], 50.00th=[ 61], 60.00th=[ 79], | 70.00th=[ 477], 80.00th=[ 477], 90.00th=[ 1385], 95.00th=[ 1385], | 99.00th=[ 1385], 99.50th=[ 1385], 99.90th=[ 1385], 99.95th=[ 1385], | 99.99th=[ 1385] bw ( KiB/s): min= 2043, max= 4096, per=33.10%, avg=2456.60, stdev=916.46, samples=5 iops : min= 1, max= 4, avg= 2.20, stdev= 1.10, samples=5 lat (usec) : 500=8.33%, 750=8.33% lat (msec) : 50=25.00%, 100=16.67%, 500=8.33%, 750=25.00% cpu : usr=0.00%, sys=9.64%, ctx=833, majf=0, minf=34 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=6,6,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31795: Thu Apr 18 22:35:17 2024 read: IOPS=1, BW=1334KiB/s (1366kB/s)(5120KiB/3839msec) clat (msec): min=35, max=623, avg=425.53, stdev=232.34 lat (msec): min=35, max=623, avg=425.53, stdev=232.34 clat percentiles (msec): | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 36], | 30.00th=[ 426], 40.00th=[ 426], 50.00th=[ 468], 60.00th=[ 468], | 70.00th=[ 575], 80.00th=[ 575], 90.00th=[ 625], 95.00th=[ 625], | 99.00th=[ 625], 99.50th=[ 625], 99.90th=[ 625], 99.95th=[ 625], | 99.99th=[ 625] bw ( KiB/s): min= 2048, max= 4096, per=51.51%, avg=2730.67, stdev=1182.41, samples=3 iops : min= 2, max= 4, avg= 2.67, stdev= 1.15, samples=3 write: IOPS=1, BW=1867KiB/s (1912kB/s)(7168KiB/3839msec) clat (msec): min=17, max=1382, avg=244.18, stdev=503.26 lat (msec): min=17, max=1382, avg=244.22, stdev=503.26 clat percentiles (msec): | 1.00th=[ 18], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 23], | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 49], 60.00th=[ 80], | 70.00th=[ 80], 80.00th=[ 116], 90.00th=[ 1385], 95.00th=[ 1385], | 99.00th=[ 1385], 99.50th=[ 1385], 99.90th=[ 1385], 99.95th=[ 1385], | 99.99th=[ 1385] bw ( KiB/s): min= 2043, max= 6144, per=48.27%, avg=3582.75, stdev=1962.12, samples=4 iops : min= 1, max= 6, avg= 3.25, stdev= 2.22, samples=4 lat (msec) : 20=8.33%, 50=33.33%, 100=8.33%, 250=8.33%, 500=16.67% lat (msec) : 750=16.67% cpu : usr=0.00%, sys=10.34%, ctx=1124, majf=0, minf=30 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=5,7,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : 
target=0, window=0, percentile=100.00%, depth=16 Run status group 0 (all jobs): READ: bw=5302KiB/s (5429kB/s), 635KiB/s-1856KiB/s (650kB/s-1900kB/s), io=20.0MiB (20.0MB), run=3226-3863msec WRITE: bw=7422KiB/s (7600kB/s), 1325KiB/s-3174KiB/s (1357kB/s-3250kB/s), io=28.0MiB (29.4MB), run=3226-3863msec rand-rw: (groupid=0, jobs=1): err= 0: pid=31796: Thu Apr 18 22:35:17 2024 read: IOPS=1, BW=1849KiB/s (1893kB/s)(7168KiB/3877msec) clat (msec): min=26, max=729, avg=421.02, stdev=274.34 lat (msec): min=26, max=729, avg=421.02, stdev=274.34 clat percentiles (msec): | 1.00th=[ 27], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 51], | 30.00th=[ 493], 40.00th=[ 493], 50.00th=[ 498], 60.00th=[ 527], | 70.00th=[ 527], 80.00th=[ 625], 90.00th=[ 735], 95.00th=[ 735], | 99.00th=[ 735], 99.50th=[ 735], 99.90th=[ 735], 99.95th=[ 735], | 99.99th=[ 735] bw ( KiB/s): min= 2015, max= 2048, per=38.70%, avg=2041.67, stdev=13.22, samples=6 iops : min= 1, max= 2, avg= 1.67, stdev= 0.52, samples=6 write: IOPS=1, BW=1321KiB/s (1352kB/s)(5120KiB/3877msec) clat (msec): min=36, max=664, avg=185.72, stdev=268.65 lat (msec): min=37, max=664, avg=185.76, stdev=268.65 clat percentiles (msec): | 1.00th=[ 37], 5.00th=[ 37], 10.00th=[ 37], 20.00th=[ 37], | 30.00th=[ 59], 40.00th=[ 59], 50.00th=[ 65], 60.00th=[ 65], | 70.00th=[ 104], 80.00th=[ 104], 90.00th=[ 667], 95.00th=[ 667], | 99.00th=[ 667], 99.50th=[ 667], 99.90th=[ 667], 99.95th=[ 667], | 99.99th=[ 667] bw ( KiB/s): min= 2048, max= 6047, per=45.78%, avg=3381.00, stdev=2308.82, samples=3 iops : min= 2, max= 5, avg= 3.00, stdev= 1.73, samples=3 lat (msec) : 50=16.67%, 100=25.00%, 250=8.33%, 500=16.67%, 750=33.33% cpu : usr=0.03%, sys=3.38%, ctx=45, majf=0, minf=35 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=7,5,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31797: Thu Apr 18 22:35:17 2024 read: IOPS=0, BW=622KiB/s (637kB/s)(2048KiB/3294msec) clat (msec): min=106, max=285, avg=196.02, stdev=126.53 lat (msec): min=106, max=285, avg=196.02, stdev=126.53 clat percentiles (msec): | 1.00th=[ 107], 5.00th=[ 107], 10.00th=[ 107], 20.00th=[ 107], | 30.00th=[ 107], 40.00th=[ 107], 50.00th=[ 107], 60.00th=[ 288], | 70.00th=[ 288], 80.00th=[ 288], 90.00th=[ 288], 95.00th=[ 288], | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 288], 99.95th=[ 288], | 99.99th=[ 288] bw ( KiB/s): min= 2048, max= 2048, per=38.82%, avg=2048.00, stdev= 0.00, samples=2 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=2 write: IOPS=3, BW=3109KiB/s (3183kB/s)(10.0MiB/3294msec) clat (msec): min=27, max=1393, avg=289.89, stdev=496.86 lat (msec): min=27, max=1393, avg=289.93, stdev=496.85 clat percentiles (msec): | 1.00th=[ 28], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 28], | 30.00th=[ 36], 40.00th=[ 39], 50.00th=[ 59], 60.00th=[ 77], | 70.00th=[ 122], 80.00th=[ 122], 90.00th=[ 1045], 95.00th=[ 1401], | 99.00th=[ 1401], 99.50th=[ 1401], 99.90th=[ 1401], 99.95th=[ 1401], | 99.99th=[ 1401] bw ( KiB/s): min= 4096, max=10240, per=97.06%, avg=7168.00, stdev=4344.46, samples=2 iops : min= 4, max= 10, avg= 7.00, stdev= 4.24, samples=2 lat (msec) : 50=33.33%, 100=25.00%, 250=16.67%, 500=8.33% cpu : usr=0.00%, sys=3.46%, ctx=98, majf=0, minf=31 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=2,10,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31798: Thu Apr 18 22:35:17 2024 read: IOPS=1, BW=1583KiB/s (1621kB/s)(6144KiB/3882msec) clat (msec): min=17, max=628, avg=286.26, stdev=281.15 lat (msec): min=17, max=628, avg=286.26, stdev=281.15 clat percentiles (msec): | 1.00th=[ 18], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 40], | 30.00th=[ 40], 40.00th=[ 45], 50.00th=[ 45], 60.00th=[ 485], | 70.00th=[ 502], 80.00th=[ 502], 90.00th=[ 625], 95.00th=[ 625], | 99.00th=[ 625], 99.50th=[ 625], 99.90th=[ 625], 99.95th=[ 625], | 99.99th=[ 625] bw ( KiB/s): min= 2048, max= 4096, per=64.65%, avg=3410.33, stdev=1179.82, samples=3 iops : min= 2, max= 4, avg= 3.00, stdev= 1.00, samples=3 write: IOPS=1, BW=1583KiB/s (1621kB/s)(6144KiB/3882msec) clat (msec): min=37, max=1382, avg=360.19, stdev=517.73 lat (msec): min=37, max=1382, avg=360.22, stdev=517.72 clat percentiles (msec): | 1.00th=[ 38], 5.00th=[ 38], 10.00th=[ 38], 20.00th=[ 39], | 30.00th=[ 39], 40.00th=[ 138], 50.00th=[ 138], 60.00th=[ 167], | 70.00th=[ 397], 80.00th=[ 397], 90.00th=[ 1385], 95.00th=[ 1385], | 99.00th=[ 1385], 99.50th=[ 1385], 99.90th=[ 1385], 99.95th=[ 1385], | 99.99th=[ 1385] bw ( KiB/s): min= 2048, max= 4096, per=41.57%, avg=3069.75, stdev=1179.82, samples=4 iops : min= 2, max= 4, avg= 2.75, stdev= 0.96, samples=4 lat (msec) : 20=8.33%, 50=33.33%, 250=16.67%, 500=16.67%, 750=16.67% cpu : usr=0.00%, sys=2.78%, ctx=56, majf=0, minf=33 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=6,6,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31799: Thu Apr 18 22:35:17 2024 read: IOPS=1, BW=1333KiB/s (1365kB/s)(5120KiB/3842msec) clat (msec): min=17, max=1048, avg=391.49, stdev=414.09 lat (msec): min=17, max=1048, avg=391.49, stdev=414.09 clat percentiles (msec): | 1.00th=[ 18], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 18], | 30.00th=[ 62], 40.00th=[ 62], 50.00th=[ 368], 60.00th=[ 368], | 70.00th=[ 464], 80.00th=[ 464], 90.00th=[ 1045], 95.00th=[ 1045], | 99.00th=[ 1045], 99.50th=[ 1045], 99.90th=[ 1045], 99.95th=[ 1045], | 99.99th=[ 1045] bw ( KiB/s): min= 2048, max= 4096, per=51.77%, avg=2730.67, stdev=1182.41, samples=3 iops : min= 2, max= 4, avg= 2.67, stdev= 1.15, samples=3 write: IOPS=1, BW=1866KiB/s (1910kB/s)(7168KiB/3842msec) clat (msec): min=35, max=1363, avg=268.23, stdev=485.05 lat (msec): min=35, max=1363, avg=268.27, stdev=485.04 clat percentiles (msec): | 1.00th=[ 36], 5.00th=[ 36], 10.00th=[ 36], 20.00th=[ 41], | 30.00th=[ 59], 40.00th=[ 59], 50.00th=[ 91], 60.00th=[ 144], | 70.00th=[ 144], 80.00th=[ 146], 90.00th=[ 1368], 95.00th=[ 1368], | 99.00th=[ 1368], 99.50th=[ 1368], 99.90th=[ 1368], 99.95th=[ 1368], | 99.99th=[ 1368] bw ( KiB/s): min= 2048, max= 6144, per=64.71%, avg=4778.67, stdev=2364.83, samples=3 iops : min= 2, max= 6, avg= 4.67, stdev= 2.31, samples=3 lat (msec) : 20=8.33%, 50=16.67%, 100=25.00%, 250=16.67%, 500=16.67% cpu : usr=0.00%, sys=3.44%, ctx=39, majf=0, minf=32 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=5,7,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 Run status group 0 (all jobs): READ: bw=5276KiB/s (5402kB/s), 622KiB/s-1849KiB/s (637kB/s-1893kB/s), io=20.0MiB (20.0MB), run=3294-3882msec WRITE: bw=7386KiB/s (7563kB/s), 1321KiB/s-3109KiB/s (1352kB/s-3183kB/s), io=28.0MiB (29.4MB), run=3294-3882msec mix direct rw 4194304 by fio with 4 jobs... mix buffer rw 4194304 by fio with 4 jobs... rand-rw: (g=0): rw=randrw, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=psync, iodepth=16 ... fio-3.7 Starting 4 processes rand-rw: (g=0): rw=randrw, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=psync, iodepth=16 ... fio-3.7 Starting 4 processes rand-rw: (groupid=0, jobs=1): err= 0: pid=31808: Thu Apr 18 22:35:20 2024 read: IOPS=3, BW=15.0MiB/s (15.7MB/s)(12.0MiB/800msec) clat (msec): min=26, max=712, avg=263.98, stdev=388.90 lat (msec): min=26, max=712, avg=263.99, stdev=388.90 clat percentiles (msec): | 1.00th=[ 27], 5.00th=[ 27], 10.00th=[ 27], 20.00th=[ 27], | 30.00th=[ 27], 40.00th=[ 53], 50.00th=[ 53], 60.00th=[ 53], | 70.00th=[ 709], 80.00th=[ 709], 90.00th=[ 709], 95.00th=[ 709], | 99.00th=[ 709], 99.50th=[ 709], 99.90th=[ 709], 99.95th=[ 709], | 99.99th=[ 709] bw ( KiB/s): min= 8192, max= 8192, per=82.05%, avg=8192.00, stdev= 0.00, samples=1 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=1 lat (msec) : 50=33.33%, 100=33.33%, 750=33.33% cpu : usr=0.00%, sys=85.23%, ctx=89, majf=0, minf=32 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=3,0,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31810: Thu Apr 18 22:35:20 2024 read: IOPS=0, BW=2496KiB/s (2556kB/s)(4096KiB/1641msec) clat (nsec): min=70365k, max=70365k, avg=70365358.00, stdev= 0.00 lat (nsec): min=70368k, max=70368k, avg=70367514.00, stdev= 0.00 clat percentiles (usec): | 1.00th=[70779], 5.00th=[70779], 10.00th=[70779], 20.00th=[70779], | 30.00th=[70779], 40.00th=[70779], 50.00th=[70779], 60.00th=[70779], | 70.00th=[70779], 80.00th=[70779], 90.00th=[70779], 95.00th=[70779], | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], | 99.99th=[70779] bw ( KiB/s): min= 8192, max= 8192, per=82.05%, avg=8192.00, stdev= 0.00, samples=1 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=1 write: IOPS=1, BW=4992KiB/s (5112kB/s)(8192KiB/1641msec) clat (msec): min=508, max=1050, avg=779.13, stdev=383.06 lat (msec): min=508, max=1050, avg=779.29, stdev=383.00 clat percentiles (msec): | 1.00th=[ 510], 5.00th=[ 510], 10.00th=[ 510], 20.00th=[ 510], | 30.00th=[ 510], 40.00th=[ 510], 50.00th=[ 510], 60.00th=[ 1053], | 70.00th=[ 1053], 80.00th=[ 1053], 90.00th=[ 1053], 95.00th=[ 1053], | 99.00th=[ 1053], 99.50th=[ 1053], 99.90th=[ 1053], 99.95th=[ 1053], | 99.99th=[ 1053] bw ( KiB/s): min= 8192, max= 8192, per=41.03%, avg=8192.00, stdev= 0.00, samples=1 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=1 lat (msec) : 100=33.33%, 750=33.33% cpu : usr=0.00%, sys=13.84%, ctx=607, majf=0, minf=30 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1,2,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31812: Thu Apr 18 22:35:20 2024 write: IOPS=2, BW=9029KiB/s (9245kB/s)(12.0MiB/1361msec) clat (msec): min=158, max=992, avg=450.91, stdev=469.72 lat (msec): min=158, max=992, avg=451.18, stdev=469.71 clat percentiles (msec): | 1.00th=[ 159], 5.00th=[ 159], 10.00th=[ 159], 20.00th=[ 159], | 30.00th=[ 159], 40.00th=[ 203], 50.00th=[ 203], 60.00th=[ 203], | 70.00th=[ 995], 80.00th=[ 995], 90.00th=[ 995], 95.00th=[ 995], | 99.00th=[ 995], 99.50th=[ 995], 99.90th=[ 995], 99.95th=[ 995], | 99.99th=[ 995] bw ( KiB/s): min= 8192, max= 8192, per=41.03%, avg=8192.00, stdev= 0.00, samples=1 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=1 lat (msec) : 250=66.67%, 1000=33.33% cpu : usr=0.00%, sys=21.18%, ctx=68, majf=0, minf=31 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=0,3,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31814: Thu Apr 18 22:35:20 2024 write: IOPS=1, BW=7758KiB/s (7944kB/s)(12.0MiB/1584msec) clat (msec): min=91, max=821, avg=522.77, stdev=382.34 lat (msec): min=92, max=821, avg=523.02, stdev=382.34 clat percentiles (msec): | 1.00th=[ 92], 5.00th=[ 92], 10.00th=[ 92], 20.00th=[ 92], | 30.00th=[ 92], 40.00th=[ 659], 50.00th=[ 659], 60.00th=[ 659], | 70.00th=[ 818], 80.00th=[ 818], 90.00th=[ 818], 95.00th=[ 818], | 99.00th=[ 818], 99.50th=[ 818], 99.90th=[ 818], 99.95th=[ 818], | 99.99th=[ 818] bw ( KiB/s): min= 8192, max= 8192, per=41.03%, avg=8192.00, stdev= 0.00, samples=2 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=2 lat (msec) : 100=33.33%, 750=33.33%, 1000=33.33% cpu : usr=0.00%, sys=16.11%, ctx=66, majf=0, minf=30 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=0,3,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 Run status group 0 (all jobs): READ: bw=9984KiB/s (10.2MB/s), 2496KiB/s-15.0MiB/s (2556kB/s-15.7MB/s), io=16.0MiB (16.8MB), run=800-1641msec WRITE: bw=19.5MiB/s (20.4MB/s), 4992KiB/s-9029KiB/s (5112kB/s-9245kB/s), io=32.0MiB (33.6MB), run=1361-1641msec rand-rw: (groupid=0, jobs=1): err= 0: pid=31809: Thu Apr 18 22:35:20 2024 read: IOPS=2, BW=8815KiB/s (9026kB/s)(12.0MiB/1394msec) clat (msec): min=135, max=977, avg=464.19, stdev=450.64 lat (msec): min=135, max=977, avg=464.20, stdev=450.64 clat percentiles (msec): | 1.00th=[ 136], 5.00th=[ 136], 10.00th=[ 136], 20.00th=[ 136], | 30.00th=[ 136], 40.00th=[ 279], 50.00th=[ 279], 60.00th=[ 279], | 70.00th=[ 978], 80.00th=[ 978], 90.00th=[ 978], 95.00th=[ 978], | 99.00th=[ 978], 99.50th=[ 978], 99.90th=[ 978], 99.95th=[ 978], | 99.99th=[ 978] bw ( KiB/s): min= 8192, max= 8192, per=87.16%, avg=8192.00, stdev= 0.00, samples=1 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=1 lat (msec) : 250=33.33%, 500=33.33%, 1000=33.33% cpu : usr=0.07%, sys=6.32%, ctx=27, majf=0, minf=31 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=3,0,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31811: Thu Apr 18 22:35:20 2024 read: IOPS=0, BW=2350KiB/s (2406kB/s)(4096KiB/1743msec) clat (nsec): min=243251k, max=243251k, avg=243251461.00, stdev= 0.00 lat (nsec): min=243253k, max=243253k, avg=243253235.00, stdev= 0.00 clat percentiles (msec): | 1.00th=[ 243], 5.00th=[ 243], 10.00th=[ 243], 20.00th=[ 243], | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 243], 60.00th=[ 243], | 70.00th=[ 243], 80.00th=[ 243], 90.00th=[ 243], 95.00th=[ 243], | 99.00th=[ 243], 99.50th=[ 243], 99.90th=[ 243], 99.95th=[ 243], | 99.99th=[ 243] bw ( KiB/s): min= 8192, max= 8192, per=87.16%, avg=8192.00, stdev= 0.00, samples=1 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=1 write: IOPS=1, BW=4700KiB/s (4813kB/s)(8192KiB/1743msec) clat (msec): min=525, max=973, avg=749.34, stdev=316.74 lat (msec): min=525, max=973, avg=749.57, stdev=316.60 clat percentiles (msec): | 1.00th=[ 527], 5.00th=[ 527], 10.00th=[ 527], 20.00th=[ 527], | 30.00th=[ 527], 40.00th=[ 527], 50.00th=[ 527], 60.00th=[ 978], | 70.00th=[ 978], 80.00th=[ 978], 90.00th=[ 978], 95.00th=[ 978], | 99.00th=[ 978], 99.50th=[ 978], 99.90th=[ 978], 99.95th=[ 978], | 99.99th=[ 978] bw ( KiB/s): min= 7968, max= 7968, per=44.40%, avg=7968.00, stdev= 0.00, samples=1 iops : min= 1, max= 1, avg= 1.00, stdev= 0.00, samples=1 lat (msec) : 250=33.33%, 750=33.33%, 1000=33.33% cpu : usr=0.00%, sys=6.14%, ctx=64, majf=0, minf=32 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1,2,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31813: Thu Apr 18 22:35:20 2024 write: IOPS=2, BW=8463KiB/s (8666kB/s)(12.0MiB/1452msec) clat (msec): min=118, max=1004, avg=483.44, stdev=463.44 lat (msec): min=118, max=1005, avg=483.64, stdev=463.44 clat percentiles (msec): | 1.00th=[ 118], 5.00th=[ 118], 10.00th=[ 118], 20.00th=[ 118], | 30.00th=[ 118], 40.00th=[ 330], 50.00th=[ 330], 60.00th=[ 330], | 70.00th=[ 1003], 80.00th=[ 1003], 90.00th=[ 1003], 95.00th=[ 1003], | 99.00th=[ 1003], 99.50th=[ 1003], 99.90th=[ 1003], 99.95th=[ 1003], | 99.99th=[ 1003] bw ( KiB/s): min= 8192, max= 8192, per=45.65%, avg=8192.00, stdev= 0.00, samples=1 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=1 lat (msec) : 250=33.33%, 500=33.33% cpu : usr=0.07%, sys=6.27%, ctx=20, majf=0, minf=31 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=0,3,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=31815: Thu Apr 18 22:35:20 2024 write: IOPS=1, BW=6729KiB/s (6891kB/s)(12.0MiB/1826msec) clat (msec): min=106, max=977, avg=606.89, stdev=449.44 lat (msec): min=107, max=977, avg=607.18, stdev=449.42 clat percentiles (msec): | 1.00th=[ 107], 5.00th=[ 107], 10.00th=[ 107], 20.00th=[ 107], | 30.00th=[ 107], 40.00th=[ 735], 50.00th=[ 735], 60.00th=[ 735], | 70.00th=[ 978], 80.00th=[ 978], 
90.00th=[  978], 95.00th=[  978],
     | 99.00th=[  978], 99.50th=[  978], 99.90th=[  978], 99.95th=[  978],
     | 99.99th=[  978]
   bw (  KiB/s): min= 8192, max= 8192, per=45.65%, avg=8192.00, stdev= 0.00, samples=1
   iops        : min=    2, max=    2, avg= 2.00, stdev= 0.00, samples=1
  lat (msec)   : 250=33.33%, 750=33.33%, 1000=33.33%
  cpu          : usr=0.00%, sys=6.85%, ctx=60, majf=0, minf=28
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,3,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=9400KiB/s (9625kB/s), 2350KiB/s-8815KiB/s (2406kB/s-9026kB/s), io=16.0MiB (16.8MB), run=1394-1743msec
  WRITE: bw=17.5MiB/s (18.4MB/s), 4700KiB/s-8463KiB/s (4813kB/s-8666kB/s), io=32.0MiB (33.6MB), run=1452-1826msec
PASS 398b (63s)
debug_raw_pointers=0
debug_raw_pointers=0
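Test 398b above sweeps the same 50/50 random mix across block sizes in both O_DIRECT and buffered modes and compares the aggregate bandwidth lines. The direct-vs-buffered contrast is easy to reproduce by hand with dd (a sketch; the path and sizes here are arbitrary, not taken from the test):

    # buffered write, then the same amount written bypassing the client page cache
    dd if=/dev/zero of=/mnt/lustre/dd_buffered bs=1M count=40
    dd if=/dev/zero of=/mnt/lustre/dd_direct bs=1M count=40 oflag=direct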
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398c: run fio to test AIO ================= 22:35:25 (1713494125)
/usr/bin/fio
debug=0
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 0.0967855 s, 433 MB/s
osc.lustre-OST0000-osc-ffff88012c001800.rpc_stats=clear
writing 40M to OST0 by fio with 4 jobs...
rand-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=16
...
fio-3.7
Starting 4 processes
rand-write: (groupid=0, jobs=1): err= 0: pid=32581: Thu Apr 18 22:36:01 2024
  write: IOPS=74, BW=298KiB/s (305kB/s)(10.0MiB/34340msec)
    slat (usec): min=12, max=596, avg=84.00, stdev=55.82
    clat (msec): min=65, max=343, avg=214.17, stdev=40.74
     lat (msec): min=65, max=343, avg=214.26, stdev=40.75
    clat percentiles (msec):
     |  1.00th=[  107],  5.00th=[  148], 10.00th=[  163], 20.00th=[  184],
     | 30.00th=[  197], 40.00th=[  207], 50.00th=[  215], 60.00th=[  226],
     | 70.00th=[  236], 80.00th=[  249], 90.00th=[  264], 95.00th=[  275],
     | 99.00th=[  300], 99.50th=[  313], 99.90th=[  321], 99.95th=[  330],
     | 99.99th=[  342]
   bw (  KiB/s): min=  208, max=  512, per=24.91%, avg=296.94, stdev=48.02, samples=68
   iops        : min=   52, max=  128, avg=74.15, stdev=12.02, samples=68
  lat (msec)   : 100=0.74%, 250=79.84%, 500=19.41%
  cpu          : usr=0.13%, sys=0.77%, ctx=1953, majf=0, minf=29
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16
rand-write: (groupid=0, jobs=1): err= 0: pid=32582: Thu Apr 18 22:36:01 2024
  write: IOPS=74, BW=298KiB/s (305kB/s)(10.0MiB/34340msec)
    slat (usec): min=13, max=564, avg=82.44, stdev=53.80
    clat (msec): min=65, max=344, avg=214.19, stdev=40.24
     lat (msec): min=65, max=344, avg=214.27, stdev=40.25
    clat percentiles (msec):
     |  1.00th=[  107],  5.00th=[  148], 10.00th=[  165], 20.00th=[  184],
     | 30.00th=[  197], 40.00th=[  205], 50.00th=[  215], 60.00th=[  224],
     | 70.00th=[  236], 80.00th=[  249], 90.00th=[  264], 95.00th=[  279],
     | 99.00th=[  300], 99.50th=[  313], 99.90th=[  330], 99.95th=[  330],
     | 99.99th=[  347]
   bw (  KiB/s): min=  216, max=  512, per=24.90%, avg=296.82, stdev=48.12, samples=68
   iops        : min=   54, max=  128, avg=74.12, stdev=12.05, samples=68
  lat (msec)   : 100=0.74%, 250=80.39%, 500=18.87%
  cpu          : usr=0.16%, sys=0.71%, ctx=2084, majf=0, minf=30
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16
rand-write: (groupid=0, jobs=1): err= 0: pid=32583: Thu Apr 18 22:36:01 2024
  write: IOPS=74, BW=298KiB/s (305kB/s)(10.0MiB/34329msec)
    slat (usec): min=13, max=592, avg=84.37, stdev=56.20
    clat (msec): min=65, max=343, avg=214.10, stdev=40.16
     lat (msec): min=65, max=343, avg=214.18, stdev=40.17
    clat percentiles (msec):
     |  1.00th=[  107],  5.00th=[  148], 10.00th=[  163], 20.00th=[  184],
     | 30.00th=[  197], 40.00th=[  205], 50.00th=[  215], 60.00th=[  226],
     | 70.00th=[  234], 80.00th=[  249], 90.00th=[  264], 95.00th=[  275],
     | 99.00th=[  300], 99.50th=[  309], 99.90th=[  334], 99.95th=[  334],
     | 99.99th=[  342]
   bw (  KiB/s): min=  224, max=  512, per=24.94%, avg=297.29, stdev=47.07, samples=68
   iops        : min=   56, max=  128, avg=74.24, stdev=11.80, samples=68
  lat (msec)   : 100=0.74%, 250=81.02%, 500=18.24%
  cpu          : usr=0.11%, sys=0.78%, ctx=2098, majf=0, minf=30
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16
rand-write: (groupid=0, jobs=1): err= 0: pid=32584: Thu Apr 18 22:36:01 2024
  write: IOPS=74, BW=298KiB/s (305kB/s)(10.0MiB/34360msec)
    slat (usec): min=12, max=596, avg=82.86, stdev=53.59
    clat (msec): min=30, max=343, avg=214.33, stdev=40.35
     lat (msec): min=30, max=343, avg=214.41, stdev=40.36
    clat percentiles (msec):
     |  1.00th=[  108],  5.00th=[  150], 10.00th=[  163], 20.00th=[  182],
     | 30.00th=[  194], 40.00th=[  207], 50.00th=[  215], 60.00th=[  226],
     | 70.00th=[  234], 80.00th=[  249], 90.00th=[  266], 95.00th=[  275],
     | 99.00th=[  309], 99.50th=[  317], 99.90th=[  334], 99.95th=[  334],
     | 99.99th=[  342]
   bw (  KiB/s): min=  184, max=  512, per=24.86%, avg=296.28, stdev=49.46, samples=68
   iops        : min=   46, max=  128, avg=74.01, stdev=12.37, samples=68
  lat (msec)   : 50=0.08%, 100=0.31%, 250=80.04%, 500=19.57%
  cpu          : usr=0.17%, sys=0.72%, ctx=2030, majf=0, minf=30
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
  WRITE: bw=1192KiB/s (1221kB/s), 298KiB/s-298KiB/s (305kB/s-305kB/s), io=40.0MiB (41.9MB), run=34329-34360msec

mix rw 40M to OST0 by fio with 4 jobs...
rand-rw: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=16
...
fio-3.7 Starting 4 processes rand-rw: (groupid=0, jobs=1): err= 0: pid=32621: Thu Apr 18 22:36:22 2024 read: IOPS=62, BW=251KiB/s (257kB/s)(5048KiB/20090msec) slat (usec): min=729, max=4551, avg=1550.44, stdev=390.45 clat (msec): min=3, max=132, avg=81.55, stdev=16.57 lat (msec): min=4, max=133, avg=83.11, stdev=16.65 clat percentiles (msec): | 1.00th=[ 43], 5.00th=[ 58], 10.00th=[ 64], 20.00th=[ 69], | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 81], 60.00th=[ 86], | 70.00th=[ 89], 80.00th=[ 95], 90.00th=[ 104], 95.00th=[ 110], | 99.00th=[ 122], 99.50th=[ 127], 99.90th=[ 132], 99.95th=[ 133], | 99.99th=[ 133] bw ( KiB/s): min= 136, max= 352, per=25.06%, avg=250.05, stdev=53.27, samples=40 iops : min= 34, max= 88, avg=62.40, stdev=13.29, samples=40 write: IOPS=64, BW=258KiB/s (265kB/s)(5192KiB/20090msec) slat (usec): min=14, max=2005, avg=89.25, stdev=86.28 clat (msec): min=68, max=322, avg=166.31, stdev=39.22 lat (msec): min=68, max=322, avg=166.40, stdev=39.23 clat percentiles (msec): | 1.00th=[ 93], 5.00th=[ 108], 10.00th=[ 118], 20.00th=[ 134], | 30.00th=[ 144], 40.00th=[ 155], 50.00th=[ 163], 60.00th=[ 174], | 70.00th=[ 184], 80.00th=[ 197], 90.00th=[ 215], 95.00th=[ 241], | 99.00th=[ 279], 99.50th=[ 292], 99.90th=[ 321], 99.95th=[ 321], | 99.99th=[ 321] bw ( KiB/s): min= 191, max= 344, per=24.80%, avg=257.65, stdev=38.69, samples=40 iops : min= 47, max= 86, avg=64.30, stdev= 9.67, samples=40 lat (msec) : 4=0.04%, 20=0.39%, 50=0.35%, 100=42.85%, 250=54.49% lat (msec) : 500=1.88% cpu : usr=0.30%, sys=2.05%, ctx=3496, majf=0, minf=33 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1262,1298,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=32622: Thu Apr 18 22:36:22 2024 read: IOPS=61, BW=248KiB/s (254kB/s)(4976KiB/20090msec) slat (usec): min=778, max=3790, avg=1550.70, stdev=365.37 clat (msec): min=2, max=132, avg=81.69, stdev=16.53 lat (msec): min=3, max=134, avg=83.24, stdev=16.60 clat percentiles (msec): | 1.00th=[ 44], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 70], | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 82], 60.00th=[ 85], | 70.00th=[ 90], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 109], | 99.00th=[ 128], 99.50th=[ 130], 99.90th=[ 132], 99.95th=[ 132], | 99.99th=[ 132] bw ( KiB/s): min= 96, max= 352, per=24.69%, avg=246.43, stdev=54.10, samples=40 iops : min= 24, max= 88, avg=61.48, stdev=13.50, samples=40 write: IOPS=65, BW=262KiB/s (268kB/s)(5264KiB/20090msec) slat (usec): min=13, max=580, avg=87.33, stdev=47.88 clat (msec): min=44, max=324, avg=165.19, stdev=40.22 lat (msec): min=44, max=325, avg=165.28, stdev=40.22 clat percentiles (msec): | 1.00th=[ 92], 5.00th=[ 107], 10.00th=[ 117], 20.00th=[ 131], | 30.00th=[ 144], 40.00th=[ 153], 50.00th=[ 163], 60.00th=[ 174], | 70.00th=[ 182], 80.00th=[ 197], 90.00th=[ 218], 95.00th=[ 243], | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 321], 99.95th=[ 326], | 99.99th=[ 326] bw ( KiB/s): min= 168, max= 359, per=25.16%, avg=261.43, stdev=42.37, samples=40 iops : min= 42, max= 89, avg=65.22, stdev=10.59, samples=40 lat (msec) : 4=0.12%, 20=0.31%, 50=0.43%, 100=43.40%, 250=53.67% lat (msec) : 500=2.07% cpu : usr=0.29%, sys=2.03%, ctx=3485, majf=0, minf=31 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1244,1316,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=32623: Thu Apr 18 22:36:22 2024 read: IOPS=62, BW=248KiB/s (254kB/s)(4984KiB/20090msec) slat (usec): min=746, max=3637, avg=1531.58, stdev=352.06 clat (usec): min=1926, max=132838, avg=82204.54, stdev=16469.91 lat (msec): min=3, max=134, avg=83.74, stdev=16.53 clat percentiles (msec): | 1.00th=[ 47], 5.00th=[ 58], 10.00th=[ 63], 20.00th=[ 70], | 30.00th=[ 74], 40.00th=[ 79], 50.00th=[ 83], 60.00th=[ 86], | 70.00th=[ 90], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 110], | 99.00th=[ 126], 99.50th=[ 130], 99.90th=[ 133], 99.95th=[ 133], | 99.99th=[ 133] bw ( KiB/s): min= 160, max= 344, per=24.81%, avg=247.65, stdev=45.52, samples=40 iops : min= 40, max= 86, avg=61.80, stdev=11.38, samples=40 write: IOPS=65, BW=262KiB/s (268kB/s)(5256KiB/20090msec) slat (usec): min=15, max=575, avg=88.19, stdev=50.83 clat (msec): min=68, max=325, avg=164.74, stdev=39.01 lat (msec): min=68, max=325, avg=164.83, stdev=39.00 clat percentiles (msec): | 1.00th=[ 93], 5.00th=[ 108], 10.00th=[ 117], 20.00th=[ 133], | 30.00th=[ 144], 40.00th=[ 153], 50.00th=[ 161], 60.00th=[ 171], | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 215], 95.00th=[ 232], | 99.00th=[ 275], 99.50th=[ 292], 99.90th=[ 321], 99.95th=[ 326], | 99.99th=[ 326] bw ( KiB/s): min= 144, max= 336, per=25.01%, avg=259.85, stdev=45.31, samples=40 iops : min= 36, max= 84, avg=64.85, stdev=11.34, samples=40 lat (msec) : 2=0.04%, 20=0.27%, 50=0.39%, 100=42.93%, 250=54.69% lat (msec) : 500=1.68% cpu : usr=0.27%, sys=2.07%, ctx=3372, majf=0, minf=31 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1246,1314,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=32624: Thu Apr 18 22:36:22 2024 read: IOPS=63, BW=253KiB/s (259kB/s)(5060KiB/19988msec) slat (usec): min=30, max=3710, avg=1524.48, stdev=372.05 clat (usec): min=1943, max=161942, avg=81724.03, stdev=16367.76 lat (msec): min=2, max=163, avg=83.25, stdev=16.47 clat percentiles (msec): | 1.00th=[ 44], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 70], | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 82], 60.00th=[ 85], | 70.00th=[ 90], 80.00th=[ 94], 90.00th=[ 104], 95.00th=[ 109], | 99.00th=[ 124], 99.50th=[ 128], 99.90th=[ 134], 99.95th=[ 163], | 99.99th=[ 163] bw ( KiB/s): min= 112, max= 408, per=25.41%, avg=253.62, stdev=57.64, samples=39 iops : min= 28, max= 102, avg=63.31, stdev=14.46, samples=39 write: IOPS=64, BW=259KiB/s (265kB/s)(5180KiB/19988msec) slat (usec): min=13, max=433, avg=85.40, stdev=45.98 clat (msec): min=33, max=321, avg=165.38, stdev=41.16 lat (msec): min=34, max=321, avg=165.46, stdev=41.15 clat percentiles (msec): | 1.00th=[ 84], 5.00th=[ 106], 10.00th=[ 117], 20.00th=[ 133], | 30.00th=[ 144], 40.00th=[ 153], 50.00th=[ 163], 60.00th=[ 171], | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 218], 95.00th=[ 245], | 99.00th=[ 288], 99.50th=[ 296], 99.90th=[ 309], 99.95th=[ 321], | 99.99th=[ 321] bw ( KiB/s): min= 160, max= 344, per=24.78%, avg=257.51, stdev=42.07, samples=39 iops : min= 40, max= 86, avg=64.28, stdev=10.50, samples=39 lat (msec) : 2=0.04%, 4=0.04%, 10=0.12%, 20=0.16%, 
50=0.35%
  lat (msec)   : 100=44.14%, 250=52.93%, 500=2.23%
  cpu          : usr=0.29%, sys=2.07%, ctx=3320, majf=0, minf=30
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1265,1295,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=999KiB/s (1023kB/s), 248KiB/s-253KiB/s (254kB/s-259kB/s), io=19.6MiB (20.5MB), run=19988-20090msec
  WRITE: bw=1040KiB/s (1065kB/s), 258KiB/s-262KiB/s (265kB/s-268kB/s), io=20.4MiB (21.4MB), run=19988-20090msec

AIO with large block size 40M
rand-rw: (g=0): rw=randrw, bs=(R) 40.0MiB-40.0MiB, (W) 40.0MiB-40.0MiB, (T) 40.0MiB-40.0MiB, ioengine=libaio, iodepth=16
fio-3.7
Starting 1 process
rand-rw: (groupid=0, jobs=1): err= 0: pid=32644: Thu Apr 18 22:36:23 2024
  read: IOPS=5, BW=208MiB/s (218MB/s)(40.0MiB/192msec)
    slat (nsec): min=5976.2k, max=5976.2k, avg=5976230.00, stdev= 0.00
    clat (nsec): min=179606k, max=179606k, avg=179606447.00, stdev= 0.00
     lat (nsec): min=185592k, max=185592k, avg=185592222.00, stdev= 0.00
    clat percentiles (msec):
     |  1.00th=[  180],  5.00th=[  180], 10.00th=[  180], 20.00th=[  180],
     | 30.00th=[  180], 40.00th=[  180], 50.00th=[  180], 60.00th=[  180],
     | 70.00th=[  180], 80.00th=[  180], 90.00th=[  180], 95.00th=[  180],
     | 99.00th=[  180], 99.50th=[  180], 99.90th=[  180], 99.95th=[  180],
     | 99.99th=[  180]
  lat (msec)   : 250=100.00%
  cpu          : usr=0.00%, sys=2.62%, ctx=17, majf=0, minf=29
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=208MiB/s (218MB/s), 208MiB/s-208MiB/s (218MB/s-218MB/s), io=40.0MiB (41.9MB), run=192-192msec
debug=trace inode super iotrace malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec lfsck hsm snapshot layout
PASS 398c (61s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398d: run aiocp to verify block size > stripe size ========================================================== 22:36:28 (1713494188)
/home/green/git/lustre-release/lustre/tests/aiocp
64+0 records in
64+0 records out
67108864 bytes (67 MB) copied, 1.95227 s, 34.4 MB/s
PASS 398d (12s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398e: O_Direct open cleared by fcntl doesn't cause hang ========================================================== 22:36:42 (1713494202)
1+0 records in
1+0 records out
1234 bytes (1.2 kB) copied, 0.00784283 s, 157 kB/s
0+1 records in
0+1 records out
1234 bytes (1.2 kB) copied, 0.0486051 s, 25.4 kB/s
PASS 398e (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398f: verify aio handles ll_direct_rw_pages errors correctly ========================================================== 22:36:47 (1713494207)
/home/green/git/lustre-release/lustre/tests/aiocp
64+0 records in
64+0 records out
67108864 bytes (67 MB) copied, 1.65985 s, 40.4 MB/s
fail_loc=0x1418
read missed bytes at 0 expected 67108864 got -12
fail_loc=0
Binary files /mnt/lustre/f398f.sanity and /mnt/lustre/f398f.sanity.aio differ
PASS 398f (5s)
debug_raw_pointers=0
debug_raw_pointers=0
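Tests 398f (above) and 398i (below) drive the ll_direct_rw_pages error path with Lustre's fail_loc fault-injection knob: arm it, run the I/O that should now fail, then disarm it. Sketched here with the value this log shows (0x1418):

    lctl set_param fail_loc=0x1418    # arm the injection point
    # ... run the aio/dio workload that is expected to fail (aiocp in this test) ...
    lctl set_param fail_loc=0         # disarm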
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398g: verify parallel dio async RPC submission ========================================================== 22:36:54 (1713494214)
1+0 records in
1+0 records out
8388608 bytes (8.4 MB) copied, 0.204209 s, 41.1 MB/s
osc.lustre-OST0000-osc-ffff88012c001800.max_pages_per_rpc=1M
fail_loc=0x214
fail_val=2
osc.lustre-OST0000-osc-ffff88012c001800.rpc_stats=c
osc.lustre-OST0001-osc-ffff88012c001800.rpc_stats=c
1+0 records in
1+0 records out
8388608 bytes (8.4 MB) copied, 2.30677 s, 3.6 MB/s
osc.lustre-OST0000-osc-ffff88012c001800.rpc_stats=
snapshot_time:         1713494218.134851097 secs.nsecs
start_time:            1713494215.784118704 secs.nsecs
elapsed_time:          2.350732393 secs.nsecs
read RPCs in flight:  0
write RPCs in flight: 0
pending write pages:  1
pending read pages:   0

                       read                    write
pages per rpc     rpcs   % cum % |      rpcs   % cum %
1:                   0   0     0 |         0   0     0
2:                   0   0     0 |         0   0     0
4:                   0   0     0 |         0   0     0
8:                   0   0     0 |         0   0     0
16:                  0   0     0 |         0   0     0
32:                  0   0     0 |         0   0     0
64:                  0   0     0 |         0   0     0
128:                 0   0     0 |         0   0     0
256:                 0   0     0 |         8 100   100

                       read                    write
rpcs in flight    rpcs   % cum % |      rpcs   % cum %
1:                   0   0     0 |         1  12    12
2:                   0   0     0 |         1  12    25
3:                   0   0     0 |         1  12    37
4:                   0   0     0 |         1  12    50
5:                   0   0     0 |         1  12    62
6:                   0   0     0 |         1  12    75
7:                   0   0     0 |         1  12    87
8:                   0   0     0 |         1  12   100

                       read                    write
offset            rpcs   % cum % |      rpcs   % cum %
0:                   0   0     0 |         2  25    25
1:                   0   0     0 |         0   0    25
2:                   0   0     0 |         0   0    25
4:                   0   0     0 |         0   0    25
8:                   0   0     0 |         0   0    25
16:                  0   0     0 |         0   0    25
32:                  0   0     0 |         0   0    25
64:                  0   0     0 |         0   0    25
128:                 0   0     0 |         0   0    25
256:                 0   0     0 |         2  25    50
512:                 0   0     0 |         4  50   100

osc.lustre-OST0000-osc-ffff88012c001800.rpc_stats=c
osc.lustre-OST0001-osc-ffff88012c001800.rpc_stats=c
llite.lustre-ffff88012c001800.parallel_dio=0
1+0 records in
1+0 records out
8388608 bytes (8.4 MB) copied, 16.99 s, 494 kB/s
osc.lustre-OST0000-osc-ffff88012c001800.rpc_stats=
snapshot_time:         1713494235.196600206 secs.nsecs
start_time:            1713494218.155432192 secs.nsecs
elapsed_time:          17.041168014 secs.nsecs
read RPCs in flight:  0
write RPCs in flight: 1
pending write pages:  0
pending read pages:   0

                       read                    write
pages per rpc     rpcs   % cum % |      rpcs   % cum %
1:                   0   0     0 |         1  11    11
2:                   0   0     0 |         0   0    11
4:                   0   0     0 |         0   0    11
8:                   0   0     0 |         0   0    11
16:                  0   0     0 |         0   0    11
32:                  0   0     0 |         0   0    11
64:                  0   0     0 |         0   0    11
128:                 0   0     0 |         0   0    11
256:                 0   0     0 |         8  88   100

                       read                    write
rpcs in flight    rpcs   % cum % |      rpcs   % cum %
1:                   0   0     0 |         8  88    88
2:                   0   0     0 |         1  11   100

                       read                    write
offset            rpcs   % cum % |      rpcs   % cum %
0:                   0   0     0 |         3  33    33
1:                   0   0     0 |         0   0    33
2:                   0   0     0 |         0   0    33
4:                   0   0     0 |         0   0    33
8:                   0   0     0 |         0   0    33
16:                  0   0     0 |         0   0    33
32:                  0   0     0 |         0   0    33
64:                  0   0     0 |         0   0    33
128:                 0   0     0 |         0   0    33
256:                 0   0     0 |         2  22    55
512:                 0   0     0 |         4  44   100

llite.lustre-ffff88012c001800.parallel_dio=1
fail_loc=0
PASS 398g (23s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398h: verify correctness of read & write with i/o size >> stripe size ========================================================== 22:37:19 (1713494239)
8+0 records in
8+0 records out
67108864 bytes (67 MB) copied, 1.57177 s, 42.7 MB/s
8+0 records in
8+0 records out
67108864 bytes (67 MB) copied, 2.07287 s, 32.4 MB/s
PASS 398h (10s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398i: verify parallel dio handles ll_direct_rw_pages errors correctly ========================================================== 22:37:31 (1713494251)
8+0 records in
8+0 records out
67108864 bytes (67 MB) copied, 1.63342 s, 41.1 MB/s
fail_loc=0x1418
dd: error reading '/mnt/lustre/f398i.sanity': Cannot allocate memory
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0146974 s, 0.0 kB/s
diff: /mnt/lustre/f398i.sanity: Cannot allocate memory
PASS 398i (5s)
debug_raw_pointers=0
debug_raw_pointers=0
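The rpc_stats tables under 398g can be sampled by hand as well; the test's sequence reduces to clearing the counters, doing the direct I/O, and reading them back, plus toggling DIO parallelism at the llite layer. A sketch, using wildcards in place of the exact device names above:

    lctl set_param osc.*.rpc_stats=clear      # the "rpc_stats=c" lines above are the same reset
    dd if=/dev/zero of=/mnt/lustre/f398g.sanity bs=8M count=1 oflag=direct
    lctl get_param osc.*.rpc_stats            # pages-per-RPC / RPCs-in-flight histograms
    lctl set_param llite.*.parallel_dio=0     # serializes DIO; note the 8.4 MB dd slowing from ~2.3 s to ~17 s above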
reading '/mnt/lustre/f398i.sanity': Cannot allocate memory 0+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0146974 s, 0.0 kB/s diff: /mnt/lustre/f398i.sanity: Cannot allocate memory PASS 398i (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398j: test parallel dio where stripe size > rpc_size ========================================================== 22:37:38 (1713494258) osc.lustre-OST0000-osc-ffff88012c001800.max_pages_per_rpc=1M 8+0 records in 8+0 records out 67108864 bytes (67 MB) copied, 1.83227 s, 36.6 MB/s 8+0 records in 8+0 records out 67108864 bytes (67 MB) copied, 2.54025 s, 26.4 MB/s PASS 398j (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398k: test enospc on first stripe ========= 22:37:49 (1713494269) sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete Waiting for orphan cleanup... osp.lustre-OST0000-osc-MDT0000.old_sync_processed osp.lustre-OST0001-osc-MDT0000.old_sync_processed wait up to 40 secs for oleg216-server mds-ost sync done. SKIP: sanity test_398k 7497728 > 600000 skipping out-of-space test on OST0 SKIP 398k (17s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398l: test enospc on intermediate stripe/RPC ========================================================== 22:38:08 (1713494288) sleep 5 for ZFS zfs Waiting for MDT destroys to complete Waiting for orphan cleanup... osp.lustre-OST0000-osc-MDT0000.old_sync_processed osp.lustre-OST0001-osc-MDT0000.old_sync_processed wait up to 40 secs for oleg216-server mds-ost sync done. 2+0 records in 2+0 records out 16777216 bytes (17 MB) copied, 0.310734 s, 54.0 MB/s SKIP: sanity test_398l 7483392 > 600000 skipping out-of-space test on OST0 SKIP 398l (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398m: test RPC failures with parallel dio ========================================================== 22:38:20 (1713494300) fail_loc=0x20e fail_val=1 dd: error writing '/mnt/lustre/f398m.sanity': Input/output error 1+0 records in 0+0 records out 0 bytes (0 B) copied, 54.3541 s, 0.0 kB/s fail_loc=0 fail_val=0 8+0 records in 8+0 records out 67108864 bytes (67 MB) copied, 1.67885 s, 40.0 MB/s fail_loc=0x20f fail_val=1 dd: error reading '/mnt/lustre/f398m.sanity': Input/output error 0+0 records in 0+0 records out 0 bytes (0 B) copied, 54.6826 s, 0.0 kB/s fail_loc=0 fail_val=0 fail_loc=0x20e fail_val=2 dd: error writing '/mnt/lustre/f398m.sanity': Input/output error 1+0 records in 0+0 records out 0 bytes (0 B) copied, 55.2977 s, 0.0 kB/s fail_loc=0 fail_val=0 8+0 records in 8+0 records out 67108864 bytes (67 MB) copied, 1.44617 s, 46.4 MB/s fail_loc=0x20f fail_val=2 dd: error reading '/mnt/lustre/f398m.sanity': Input/output error 0+0 records in 0+0 records out 0 bytes (0 B) copied, 55.1477 s, 0.0 kB/s fail_loc=0 fail_val=0 fail_loc=0 fail_loc=0 PASS 398m (227s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398n: test append with parallel DIO ======= 22:42:10 (1713494530) 8+0 records in 8+0 records out 67108864 bytes (67 MB) copied, 2.07618 s, 32.3 MB/s 8+0 records in 8+0 records out 67108864 bytes (67 MB) copied, 1.25831 s, 53.3 MB/s PASS 398n (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y
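The 398 series above exercises parallel direct I/O: dd with O_DIRECT against striped files, client tunables toggled in between, and RPC failures injected through fail_loc. As a rough sketch of the pattern being tested (the file path and sizes here are illustrative, not the exact test parameters):

  # write with O_DIRECT, then read it back the same way
  dd if=/dev/zero of=/mnt/lustre/dio_test bs=8M count=1 oflag=direct
  dd if=/mnt/lustre/dio_test of=/dev/null bs=8M count=1 iflag=direct
  # clear, then dump, the per-OSC RPC histograms around a run, as 398g does
  lctl set_param osc.*.rpc_stats=c
  lctl get_param osc.*.rpc_stats
  # disable parallel DIO submission for a serial baseline, then restore it
  lctl set_param llite.*.parallel_dio=0
  lctl set_param llite.*.parallel_dio=1

The 398g numbers above show why this matters: with the injected 2-second RPC delay, the 8 MB write completes at 3.6 MB/s when its eight 1 MB RPCs are in flight in parallel, but drops to 494 kB/s once parallel_dio is turned off and the same RPCs go out one at a time.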
== sanity test 398o: right kms with DIO ================== 22:42:19 (1713494539) directio on /mnt/lustre/f398o.sanity for 1x1 bytes PASS PASS 398o (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398p: race aio with buffered i/o ========== 22:42:23 (1713494543) /home/green/git/lustre-release/lustre/tests/aiocp 1+0 records in 1+0 records out 26214400 bytes (26 MB) copied, 0.85289 s, 30.7 MB/s bs: 4096, file_size 26214400 3200+0 records in 3200+0 records out 26214400 bytes (26 MB) copied, 5.02638 s, 5.2 MB/s /mnt/lustre/f398p.sanity.2 has type file OK /mnt/lustre/f398p.sanity.2 has size 26214400 OK bs: 16384, file_size 26214400 800+0 records in 800+0 records out 26214400 bytes (26 MB) copied, 1.45136 s, 18.1 MB/s /mnt/lustre/f398p.sanity.2 has type file OK /mnt/lustre/f398p.sanity.2 has size 26214400 OK bs: 1048576, file_size 26214400 12+1 records in 12+1 records out 26214400 bytes (26 MB) copied, 0.794684 s, 33.0 MB/s /mnt/lustre/f398p.sanity.2 has type file OK /mnt/lustre/f398p.sanity.2 has size 26214400 OK bs: 4194304, file_size 26214400 3+1 records in 3+1 records out 26214400 bytes (26 MB) copied, 0.742034 s, 35.3 MB/s /mnt/lustre/f398p.sanity.2 has type file OK /mnt/lustre/f398p.sanity.2 has size 26214400 OK PASS 398p (21s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398q: race dio with buffered i/o ========== 22:42:46 (1713494566) 1+0 records in 1+0 records out 26214400 bytes (26 MB) copied, 0.866944 s, 30.2 MB/s bs: 4096, file_size 26214400 3200+0 records in 3200+0 records out 26214400 bytes (26 MB) copied, 4.04366 s, 6.5 MB/s 3200+0 records in 3200+0 records out 26214400 bytes (26 MB) copied, 55.993 s, 468 kB/s /mnt/lustre/f398q.sanity.2 has type file OK /mnt/lustre/f398q.sanity.2 has size 26214400 OK bs: 16384, file_size 26214400 800+0 records in 800+0 records out 26214400 bytes (26 MB) copied, 1.4237 s, 18.4 MB/s 800+0 records in 800+0 records out 26214400 bytes (26 MB) copied, 16.7395 s, 1.6 MB/s /mnt/lustre/f398q.sanity.2 has type file OK /mnt/lustre/f398q.sanity.2 has size 26214400 OK bs: 1048576, file_size 26214400 12+1 records in 12+1 records out 26214400 bytes (26 MB) copied, 1.01522 s, 25.8 MB/s 12+1 records in 12+1 records out 26214400 bytes (26 MB) copied, 1.52449 s, 17.2 MB/s /mnt/lustre/f398q.sanity.2 has type file OK /mnt/lustre/f398q.sanity.2 has size 26214400 OK bs: 4194304, file_size 26214400 3+1 records in 3+1 records out 26214400 bytes (26 MB) copied, 0.882814 s, 29.7 MB/s 3+1 records in 3+1 records out 26214400 bytes (26 MB) copied, 1.13266 s, 23.1 MB/s /mnt/lustre/f398q.sanity.2 has type file OK /mnt/lustre/f398q.sanity.2 has size 26214400 OK PASS 398q (82s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398r: i/o error on file read ============== 22:44:10 (1713494650) fail_loc=0x20f cat: /mnt/lustre/f398r.sanity: Input/output error PASS 398r (58s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398s: i/o error on mirror file read ======= 22:45:10 (1713494710) fail_loc=0x20f cat: /mnt/lustre/f398s.sanity: Input/output error PASS 398s (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 399a: fake write should not be slower than normal write ========================================================== 22:45:15 (1713494715) debug=0 1000+0 records in 1000+0 records out 1048576000 bytes (1.0 GB) copied, 23.4212 s, 44.8 MB/s fail_loc=0x238 1000+0 records in 1000+0 records out 1048576000 bytes (1.0 GB)
copied, 26.5727 s, 39.5 MB/s /mnt/lustre/f399a.sanity has type file OK /mnt/lustre/f399a.sanity has size 1048576000 OK fail_loc=0 fake write 26.589110005 vs. normal write 23.444490820 in seconds running in VM 'kvm', ignore error sanity test_399a: @@@@@@ IGNORE (env=kvm): fake write is slower Trace dump: = /home/green/git/lustre-release/lustre/tests/test-framework.sh:7027:error_ignore() = /home/green/git/lustre-release/lustre/tests/test-framework.sh:7042:error_not_in_vm() = /home/green/git/lustre-release/lustre/tests/sanity.sh:28110:test_fake_rw() = /home/green/git/lustre-release/lustre/tests/sanity.sh:28118:test_399a() = /home/green/git/lustre-release/lustre/tests/test-framework.sh:7351:run_one() = /home/green/git/lustre-release/lustre/tests/test-framework.sh:7411:run_one_logged() = /home/green/git/lustre-release/lustre/tests/test-framework.sh:7222:run_test() = /home/green/git/lustre-release/lustre/tests/sanity.sh:28120:main() Dumping lctl log to /tmp/testlogs//sanity.test_399a.*.1713494768.log rsync: chown "/tmp/testlogs/.sanity.test_399a.debug_log.oleg216-server.1713494768.log.1EEmMY" failed: Operation not permitted (1) rsync: chown "/tmp/testlogs/.sanity.test_399a.dmesg.oleg216-server.1713494768.log.6r8JEJ" failed: Operation not permitted (1) rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1651) [generator=3.1.2] PASS 399a (56s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 399b: fake read should not be slower than normal read ========================================================== 22:46:13 (1713494773) SKIP: sanity test_399b ldiskfs only test SKIP 399b (1s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity test_400a skipping excluded test 400a debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 400b: packaged headers can be compiled ==== 22:46:16 (1713494776) PASS 400b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 401a: Verify if 'lctl list_param -R' can list parameters recursively ========================================================== 22:46:20 (1713494780) proc_dirs='/proc/fs/lustre/ /sys/fs/lustre/ /sys/kernel/debug/lnet/ /sys/kernel/debug/lustre/' PASS 401a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 401b: Verify 'lctl {get,set}_param' continue after error ========================================================== 22:46:25 (1713494785) error: set_param: param_path 'foo': No such file or directory error: set_param: setting 'foo'='bar': No such file or directory jobid_name=testing%p error: set_param: param_path 'bar': No such file or directory error: set_param: setting 'bar'='baz': No such file or directory error: get_param: param_path 'foe': No such file or directory error: get_param: param_path 'baz': No such file or directory error: set_param: param_path 'fog': No such file or directory error: set_param: setting 'fog'='bam': No such file or directory error: set_param: param_path 'bat': No such file or directory error: set_param: setting 'bat'='fog': No such file or directory error: get_param: param_path 'foe': No such file or directory error: get_param: param_path 'bag': No such file or directory PASS 401b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 401c: Verify 'lctl set_param' without value fails in either format. 
========================================================== 22:46:30 (1713494790) error: set_param: setting jobid_name: Invalid argument error: set_param: setting jobid_name: Invalid argument PASS 401c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 401d: Verify 'lctl set_param' accepts values containing '=' ========================================================== 22:46:34 (1713494794) jobid_name=foo=bar%p jobid_name=%e.%u jobid_name=foo=bar%p jobid_name=%e.%u PASS 401d (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 401e: verify 'lctl get_param' works with NID in parameter ========================================================== 22:46:37 (1713494797) ldlm.namespaces.MGC192.168.202.116@tcp ldlm.namespaces.MGC192.168.202.116@tcp.contended_locks ldlm.namespaces.MGC192.168.202.116@tcp.contention_seconds ldlm.namespaces.MGC192.168.202.116@tcp.ctime_age_limit ldlm.namespaces.MGC192.168.202.116@tcp.dirty_age_limit ldlm.namespaces.MGC192.168.202.116@tcp.early_lock_cancel ldlm.namespaces.MGC192.168.202.116@tcp.lock_count ldlm.namespaces.MGC192.168.202.116@tcp.lock_timeouts ldlm.namespaces.MGC192.168.202.116@tcp.lock_unused_count ldlm.namespaces.MGC192.168.202.116@tcp.lru_cancel_batch ldlm.namespaces.MGC192.168.202.116@tcp.lru_max_age ldlm.namespaces.MGC192.168.202.116@tcp.lru_size ldlm.namespaces.MGC192.168.202.116@tcp.max_nolock_bytes ldlm.namespaces.MGC192.168.202.116@tcp.max_parallel_ast ldlm.namespaces.MGC192.168.202.116@tcp.ns_recalc_pct ldlm.namespaces.MGC192.168.202.116@tcp.pool ldlm.namespaces.MGC192.168.202.116@tcp.pool.cancel_rate ldlm.namespaces.MGC192.168.202.116@tcp.pool.client_lock_volume ldlm.namespaces.MGC192.168.202.116@tcp.pool.grant_plan ldlm.namespaces.MGC192.168.202.116@tcp.pool.grant_rate ldlm.namespaces.MGC192.168.202.116@tcp.pool.grant_speed ldlm.namespaces.MGC192.168.202.116@tcp.pool.granted ldlm.namespaces.MGC192.168.202.116@tcp.pool.limit ldlm.namespaces.MGC192.168.202.116@tcp.pool.lock_volume_factor ldlm.namespaces.MGC192.168.202.116@tcp.pool.recalc_period ldlm.namespaces.MGC192.168.202.116@tcp.pool.recalc_time ldlm.namespaces.MGC192.168.202.116@tcp.pool.server_lock_volume ldlm.namespaces.MGC192.168.202.116@tcp.pool.state ldlm.namespaces.MGC192.168.202.116@tcp.pool.stats ldlm.namespaces.MGC192.168.202.116@tcp.resource_count ldlm.namespaces.MGC192.168.202.116@tcp.lru_size=400 PASS 401e (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 402: Return ENOENT to lod_generate_and_set_lovea ========================================================== 22:46:41 (1713494801) fail_loc=0x8000015c touch: cannot touch '/mnt/lustre/d402.sanity/f402.sanity': No such file or directory Touch failed - OK PASS 402 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 403: i_nlink should not drop to zero due to aliasing ========================================================== 22:46:45 (1713494805) fail_loc=0x80001409 vm.drop_caches = 2 PASS 403 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 404: validate manual {de}activated works properly for OSPs ========================================================== 22:46:49 (1713494809) Deactivate: lustre-OST0000-osc-MDT0000 Activate: lustre-OST0000-osc-MDT0000 Deactivate: lustre-OST0001-osc-MDT0000 Activate: lustre-OST0001-osc-MDT0000 PASS 404 (3s) debug_raw_pointers=0 
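Tests 401a-401e above exercise the lctl parameter interface: recursive listing, continuing past bad parameter names, values that contain '=', and NID-qualified parameter components. A minimal sketch of the same interface (the values are illustrative):

  # list every parameter below a subtree, as 401a verifies with -R
  lctl list_param -R ldlm.namespaces
  # set a parameter and read it back; the value may itself contain '=', as 401d checks
  lctl set_param jobid_name=foo=bar%p
  lctl get_param jobid_name
  # a NID is a legal component of a parameter name, as 401e checks
  lctl get_param ldlm.namespaces.MGC192.168.202.116@tcp.lru_size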
debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 405: Various layout swap lock tests ======= 22:46:54 (1713494814) SKIP: sanity test_405 layout swap does not support DOM files so far SKIP 405 (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 406: DNE support fs default striping ====== 22:47:00 (1713494820) SKIP: sanity test_406 needs >= 2 MDTs SKIP 406 (0s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity test_407 skipping ALWAYS excluded test 407 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 408: drop_caches should not hang due to page leaks ========================================================== 22:47:03 (1713494823) 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.0305691 s, 134 kB/s fail_loc=0x8000040a dd: error writing '/mnt/lustre/f408.sanity': Invalid argument 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00996072 s, 0.0 kB/s PASS 408 (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 409: Large amount of cross-MDTs hard links on the same file ========================================================== 22:47:09 (1713494829) SKIP: sanity test_409 needs >= 2 MDTs SKIP 409 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 410: Test inode number returned from kernel thread ========================================================== 22:47:12 (1713494832) kunit/kinode options: 'run_id=16861 fname=/mnt/lustre/f410.sanity' PASS 410 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 411a: Slab allocation error with cgroup does not LBUG ========================================================== 22:47:16 (1713494836) 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 2.5567 s, 41.0 MB/s dd: error reading '/mnt/lustre/f411a.sanity': Bad address 144+0 records in 144+0 records out 73728 bytes (74 kB) copied, 0.105273 s, 700 kB/s cache 712704 rss 0 rss_huge 0 mapped_file 0 swap 0 pgpgin 581 pgpgout 407 pgfault 394 pgmajfault 12 inactive_anon 0 active_anon 0 inactive_file 516096 active_file 196608 unevictable 0 hierarchical_memory_limit 1048576 hierarchical_memsw_limit 9223372036854771712 total_cache 712704 total_rss 0 total_rss_huge 0 total_mapped_file 0 total_swap 0 total_pgpgin 0 total_pgpgout 0 total_pgfault 0 total_pgmajfault 0 total_inactive_anon 0 total_active_anon 0 total_inactive_file 516096 total_active_file 196608 total_unevictable 0 PASS 411a (5s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity test_411b skipping ALWAYS excluded test 411b debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 412: mkdir on specific MDTs =============== 22:47:24 (1713494844) SKIP: sanity test_412 needs >= 2 MDTs SKIP 412 (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413a: QoS mkdir with 'lfs mkdir -i -1' ==== 22:47:26 (1713494846) SKIP: sanity test_413a We need at least 2 MDTs for this test SKIP 413a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413b: QoS mkdir under dir whose default LMV starting MDT offset is -1 ========================================================== 22:47:29 (1713494849) SKIP: sanity test_413b We need at least 2 MDTs for this test SKIP 413b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413c: mkdir with 
default LMV max inherit rr ========================================================== 22:47:32 (1713494852) SKIP: sanity test_413c We need at least 2 MDTs for this test SKIP 413c (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413d: inherit ROOT default LMV ============ 22:47:35 (1713494855) SKIP: sanity test_413d We need at least 2 MDTs for this test SKIP 413d (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413e: check default max-inherit value ===== 22:47:38 (1713494858) SKIP: sanity test_413e We need at least 2 MDTs for this test SKIP 413e (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413f: lfs getdirstripe -D list ROOT default LMV if it's not set on dir ========================================================== 22:47:41 (1713494861) SKIP: sanity test_413f We need at least 2 MDTs for this test SKIP 413f (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413g: enforce ROOT default LMV on subdir mount ========================================================== 22:47:44 (1713494864) SKIP: sanity test_413g We need at least 2 MDTs for this test SKIP 413g (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413h: don't stick to parent for round-robin dirs ========================================================== 22:47:47 (1713494867) SKIP: sanity test_413h We need at least 2 MDTs for this test SKIP 413h (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413i: check default layout inheritance ==== 22:47:50 (1713494870) SKIP: sanity test_413i needs >= 2 MDTs SKIP 413i (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413j: set default LMV by setxattr ========= 22:47:53 (1713494873) SKIP: sanity test_413j needs >= 2 MDTs SKIP 413j (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413k: QoS mkdir exclude prefixes ========== 22:47:55 (1713494875) lmv.lustre-clilmv-ffff88012c001800.qos_exclude_prefixes=+abc:123:foo bar lmv.lustre-clilmv-ffff88012c001800.qos_exclude_prefixes=-abc:123:foo bar lmv.lustre-clilmv-ffff88012c001800.qos_exclude_prefixes=_temporary PASS 413k (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413z: 413 test cleanup ==================== 22:48:00 (1713494880) ls: cannot access /mnt/lustre/d413*-fillmdt/*: No such file or directory PASS 413z (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 414: simulate ENOMEM in ptlrpc_register_bulk() ========================================================== 22:48:04 (1713494884) fail_loc=0x80000521 1+0 records in 1+0 records out 2097152 bytes (2.1 MB) copied, 0.202561 s, 10.4 MB/s PASS 414 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 415: lock revoke is not missing =========== 22:48:08 (1713494888) total: 50 open/close in 0.45 seconds: 112.30 ops/second sleep 5 for ZFS zfs sleep 5 for ZFS zfs rename 50 files without 'touch' took 0 sec rename 50 files with 'touch' took 1 sec /home/green/git/lustre-release/lustre/tests/test-framework.sh: line 4697: 687 Killed ( while true; do touch $DIR/$tdir; done ) (wd: ~) - unlinked 0 (time 1713494909 ; total 0 ; 
last 0) total: 50 unlinks in 0 seconds: inf unlinks/second PASS 415 (21s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 416: transaction start failure won't cause system hang ========================================================== 22:48:32 (1713494912) fail_loc=0x19a lfs mkdir: dirstripe error on '/mnt/lustre/d416.sanity': Input/output error lfs setdirstripe: cannot create dir '/mnt/lustre/d416.sanity': Input/output error PASS 416 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 417: disable remote dir, striped dir and dir migration ========================================================== 22:48:37 (1713494917) SKIP: sanity test_417 needs >= 2 MDTs SKIP 417 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 418: df and lfs df outputs match ========== 22:48:40 (1713494920) sleep 5 for ZFS zfs sleep 5 for ZFS zfs Waiting for MDT destroys to complete ldlm.namespaces.MGC192.168.202.116@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.MGC192.168.202.116@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear Creating a single file and testing ldlm.namespaces.MGC192.168.202.116@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.MGC192.168.202.116@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear Creating 3276 files and testing Writing 224 4K blocks and testing ldlm.namespaces.MGC192.168.202.116@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.MGC192.168.202.116@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear PASS 418 (41s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 419: Verify open file by name doesn't crash kernel ========================================================== 22:49:24 (1713494964) fail_loc=0x1410 fail_loc=0 PASS 419 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 420: clear SGID bit on non-directories for non-members ========================================================== 22:49:29 (1713494969) drwxrwsrwt 2 0 0 512 Apr 18 22:49 /mnt/lustre/d420.sanity/testdir Succeed in opening file "/mnt/lustre/d420.sanity/testdir/testfile"(flags=O_RDONLY, mode=2755) -rwxr-xr-x 1 500 0 0 Apr 18 22:49 /mnt/lustre/d420.sanity/testdir/testfile PASS 420 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y
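The 421 series below removes files by FID rather than by pathname, through lfs rmfid. A minimal sketch of that interface (the path is illustrative; lfs path2fid is the usual way to obtain the FID):

  # resolve a file's FID, then ask the filesystem to unlink it by FID, as 421a does
  FID=$(lfs path2fid /mnt/lustre/somefile)
  lfs rmfid /mnt/lustre "$FID"
  # 421a also verifies the fsname works in place of the mount point
  lfs rmfid lustre "$FID"

As the subtests show, rmfid reports 'Device or resource busy' for an open file (421b), 'Operation not permitted' for unprivileged callers (421f), and 'No such file or directory' for FIDs outside a fileset-restricted mount (421h).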
== sanity test 421a: simple rm by fid ==================== 22:49:34 (1713494974) total: 3 open/close in 0.02 seconds: 166.76 ops/second stat: cannot stat '/mnt/lustre/d421a.sanity/f1': No such file or directory stat: cannot stat '/mnt/lustre/d421a.sanity/f2': No such file or directory total: 3 open/close in 0.01 seconds: 218.87 ops/second remove using fsname lustre PASS 421a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 421b: rm by fid on open file ============== 22:49:39 (1713494979) total: 3 open/close in 0.03 seconds: 91.17 ops/second multiop /mnt/lustre/d421b.sanity/f1 vo_c TMPPIPE=/tmp/multiop_open_wait_pipe.6927 lfs rmfid: cannot remove [0x200004281:0x1a06:0x0]: Device or resource busy PASS 421b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 421c: rm by fid against hardlinked files == 22:49:43 (1713494983) total: 3 open/close in 0.02 seconds: 130.28 ops/second total: 180 link in 0.62 seconds: 291.60 ops/second PASS 421c (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 421d: rmfid en masse ====================== 22:49:50 (1713494990) - open/close 3591 (time 1713495001.06 total 10.00 last 359.09) total: 4097 open/close in 11.16 seconds: 367.23 ops/second PASS 421d (32s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 421e: rmfid in DNE ======================== 22:50:24 (1713495024) SKIP: sanity test_421e needs >= 2 MDTs SKIP 421e (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 421f: rmfid checks permissions ============ 22:50:27 (1713495027) running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [rmfid] [/mnt/lustre] [[0x200004281:0x2a10:0x0]] lfs rmfid: cannot remove FIDs: Operation not permitted total 293 drwxrwxrwx 2 root root 512 Apr 18 22:50 . drwxrwxrwx 149 root sanityusr 298496 Apr 18 22:50 .. -rw-r--r-- 1 root root 0 Apr 18 22:50 f running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [rmfid] [/mnt/lustre] [[0x200004281:0x2a10:0x0]] lfs rmfid: cannot remove FIDs: Operation not permitted running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d421f.sanity/f] rmfid as root running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d421f.sanity/f] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [rmfid] [/mnt/lustre] [[0x200004281:0x2a12:0x0]] lfs rmfid: cannot remove FIDs: Operation not permitted Starting client: oleg216-client.virtnet: -o user_xattr,flock,user_fid2path oleg216-server@tcp:/lustre /tmp/lustre-iY4PGi running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [rmfid] [/tmp/lustre-iY4PGi] [[0x200004281:0x2a12:0x0]] total 293 drwxrwxrwx 2 root root 512 Apr 18 22:50 . drwxrwxrwx 149 root sanityusr 298496 Apr 18 22:50 ..
-rw-r--r-- 1 root root 0 Apr 18 22:50 f running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [rmfid] [/tmp/lustre-iY4PGi] [[0x200004282:0x1:0x0]] lfs rmfid: cannot remove [0x200004282:0x1:0x0]: Permission denied 192.168.202.116@tcp:/lustre /tmp/lustre-iY4PGi lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,user_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg216-client.virtnet /tmp/lustre-iY4PGi (opts:) PASS 421f (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 421g: rmfid to return errors properly ===== 22:50:33 (1713495033) SKIP: sanity test_421g needs >= 2 MDTs SKIP 421g (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 421h: rmfid with fileset mount ============ 22:50:36 (1713495036) File /mnt/lustre/d421h.sanity/subdir/file0 FID [0x200004281:0x2a15:0x0] File /mnt/lustre/d421h.sanity/subdir/fileA FID [0x200004281:0x2a16:0x0] File /mnt/lustre/d421h.sanity/subdir/fileB FID [0x200004281:0x2a17:0x0] File /mnt/lustre/d421h.sanity/subdir/fileC FID [0x200004281:0x2a18:0x0] File /mnt/lustre/d421h.sanity/fileD FID [0x200004281:0x2a19:0x0] Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre/d421h.sanity/subdir /mnt/lustre_other Removing FIDs: /home/green/git/lustre-release/lustre/utils/lfs rmfid /mnt/lustre_other [0x200004281:0x2a15:0x0] [0x200004281:0x2a16:0x0] [0x200004281:0x2a19:0x0] [0x200004281:0x2a17:0x0] [0x200004281:0x2a18:0x0] lfs rmfid: cannot remove [0x200004281:0x2a18:0x0]: No such file or directory lfs rmfid: cannot remove [0x200004281:0x2a19:0x0]: No such file or directory lfs rmfid: cannot remove [0x200004281:0x2a15:0x0]: No such file or directory 192.168.202.116@tcp:/lustre/d421h.sanity/subdir /mnt/lustre_other lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg216-client.virtnet /mnt/lustre_other (opts:) stat: cannot stat '/mnt/lustre/d421h.sanity/subdir/fileA': No such file or directory stat: cannot stat '/mnt/lustre/d421h.sanity/subdir/fileB': No such file or directory File: '/mnt/lustre/d421h.sanity/subdir/fileC' Size: 0 Blocks: 1 IO Block: 4194304 regular empty file Device: 2c54f966h/743766374d Inode: 144115473707969048 Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-18 22:50:37.000000000 -0400 Modify: 2024-04-18 22:50:37.000000000 -0400 Change: 2024-04-18 22:50:37.000000000 -0400 Birth: - File: '/mnt/lustre/d421h.sanity/fileD' Size: 0 Blocks: 1 IO Block: 4194304 regular empty file Device: 2c54f966h/743766374d Inode: 144115473707969049 Links: 1 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-18 22:50:37.000000000 -0400 Modify: 2024-04-18 22:50:37.000000000 -0400 Change: 2024-04-18 22:50:37.000000000 -0400 Birth: - PASS 421h (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 422: kill a process with RPC in progress == 22:50:41 (1713495041) 1+0 records in 1+0 records out 1024 bytes (1.0 kB) copied, 0.00645914 s, 159 kB/s 1+0 records in 1+0 records out 1024 bytes (1.0 kB) copied, 0.00556897 s, 184 kB/s at_max=0 at_max=0 fail_loc=0x8000050a fail_val=50000 fail_loc=0x80000722 fail_val=45 kill 11893 /home/green/git/lustre-release/lustre/tests/sanity.sh: line 30130: 11893 Killed mv $DIR/$tdir/d1/file1 $DIR/$tdir/d1/file2 at_max=600 at_max=600 [ 9392.502477] Lustre: 
mdt00_002: service thread pid 11421 was inactive for 40.074 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [ 9394.806546] Lustre: mdt_io00_002: service thread pid 11434 was inactive for 40.116 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: PASS 422 (64s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 423: statfs should return the right data ==== 22:51:47 (1713495107) PASS 423 (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 424: simulate ENOMEM in ptl_send_rpc bulk reply ME attach ========================================================== 22:51:52 (1713495112) fail_loc=0x80000522 1+0 records in 1+0 records out 2097152 bytes (2.1 MB) copied, 0.137837 s, 15.2 MB/s PASS 424 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 425: lock count should not exceed lru size ========================================================== 22:51:56 (1713495116) ldlm.namespaces.MGC192.168.202.116@tcp.lru_size=100 ldlm.namespaces.lustre-MDT0000-mdc-ffff88012c001800.lru_size=100 ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=100 ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=100 ldlm.namespaces.MGC192.168.202.116@tcp.lru_size=0 ldlm.namespaces.lustre-MDT0000-mdc-ffff88012c001800.lru_size=0 ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=0 ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=0 PASS 425 (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 426: splice test on Lustre ================ 22:52:06 (1713495126) splice-test: splice: Bad address concurrent reader with O_DIRECT read: /mnt/lustre/f426.sanity: unexpected EOF concurrent reader with O_DIRECT concurrent reader without O_DIRECT concurrent reader without O_DIRECT splice-test: splice: Bad address sequential reader with O_DIRECT sequential reader without O_DIRECT PASS 426 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 427: Failed DNE2 update request shouldn't corrupt updatelog ========================================================== 22:52:10 (1713495130) SKIP: sanity test_427 needs >= 2 MDTs SKIP 427 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 428: large block size IO should not hang == 22:52:13 (1713495133) 1+0 records in 1+0 records out 134217728 bytes (134 MB) copied, 24.8606 s, 5.4 MB/s 1+0 records in 1+0 records out 134217728 bytes (134 MB) copied, 25.4433 s, 5.3 MB/s 1+0 records in 1+0 records out 134217728 bytes (134 MB) copied, 26.0333 s, 5.2 MB/s 1+0 records in 1+0 records out 134217728 bytes (134 MB) copied, 29.363 s, 4.6 MB/s 1+0 records in 1+0 records out 134217728 bytes (134 MB) copied, 5.85183 s, 22.9 MB/s 1+0 records in 1+0 records out 134217728 bytes (134 MB) copied, 5.92797 s, 22.6 MB/s 1+0 records in 1+0 records out 134217728 bytes (134 MB) copied, 5.95097 s, 22.6 MB/s 1+0 records in 1+0 records out 134217728 bytes (134 MB) copied, 6.01032 s, 22.3 MB/s PASS 428 (39s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 429: verify that the opencache flag on the client side works ========================================================== 22:52:53 (1713495173)
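The check below opens the same file repeatedly and counts MDC RPCs; once the per-file open count crosses the threshold, the client caches the open lock, so later opens should add no new RPCs. A minimal sketch of the knobs involved (the threshold value is the one this run sets):

  # cache the open lock after 5 opens of the same file
  lctl set_param llite.*.opencache_threshold_count=5
  # zero the MDC counters, repeat the opens, then read the RPC counts back
  lctl set_param mdc.*.stats=clear
  lctl get_param mdc.*.stats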
llite.lustre-ffff88012c001800.opencache_threshold_count=5 mdc.lustre-MDT0000-mdc-ffff88012c001800.stats=clear 1st: 2 RPCs in flight 2nd: 2 RPCs in flight 3rd: 2 RPCs in flight PASS 429 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 430a: lseek: SEEK_DATA/SEEK_HOLE basic functionality ========================================================== 22:52:57 (1713495177) SKIP: sanity test_430a MDT does not support SEEK_HOLE SKIP 430a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 430b: lseek: SEEK_DATA/SEEK_HOLE special cases ========================================================== 22:53:00 (1713495180) SKIP: sanity test_430b OST does not support SEEK_HOLE SKIP 430b (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 430c: lseek: external tools check ========= 22:53:02 (1713495182) SKIP: sanity test_430c OST does not support SEEK_HOLE SKIP 430c (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 431: Restart transaction for IO =========== 22:53:05 (1713495185) 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00279052 s, 1.5 MB/s 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00262077 s, 1.6 MB/s 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.000912233 s, 4.5 MB/s fail_loc=0x251 PASS 431 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 432: mv dir from outside Lustre =========== 22:53:10 (1713495190) On MGS 192.168.202.116, active = nodemap.active=1 waiting 10 secs for sync On MGS 192.168.202.116, default.admin_nodemap = nodemap.default.admin_nodemap=1 waiting 10 secs for sync On MGS 192.168.202.116, default.trusted_nodemap = nodemap.default.trusted_nodemap=1 waiting 10 secs for sync On MGS 192.168.202.116, active = nodemap.active=0 waiting 10 secs for sync PASS 432 (47s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 433: ldlm lock cancel releases dentries and inodes ========================================================== 22:53:59 (1713495239) llite.lustre-ffff88012c001800.inode_cache=0 total: 256 create in 0.48 seconds: 538.70 ops/second total: 256 mkdir in 0.56 seconds: 457.77 ops/second lustre_inode_cache 912 objs before lock cancel, 399 after llite.lustre-ffff88012c001800.inode_cache=1 PASS 433 (11s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 434: Client should not send RPCs for security.selinux with SElinux disabled ========================================================== 22:54:12 (1713495252) llite.lustre-ffff88012c001800.xattr_cache=0 PASS 434 (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 440: bash completion for lfs, lctl ======== 22:54:19 (1713495259) PASS 440 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 442: truncate vs read/write should not panic ========================================================== 22:54:24 (1713495264) fail_loc=0x1430 PASS 442 (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 460d: Check encrypt pools output ========== 22:54:35 (1713495275) physical_pages: 955079 pools: PASS 460d (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == 
sanity test 801a: write barrier user interfaces and state machine ========================================================== 22:54:40 (1713495280) debug=-1 debug_mb=150 debug=-1 debug_mb=150 Start barrier_freeze at: Thu Apr 18 22:54:42 EDT 2024 fail_val=5 fail_loc=0x2202 Got barrier status at: Thu Apr 18 22:54:44 EDT 2024 fail_val=0 fail_loc=0 sleep 21 seconds, then the barrier will be expired Start barrier_thaw at: Thu Apr 18 22:55:07 EDT 2024 fail_val=5 fail_loc=0x2202 Got barrier status at: Thu Apr 18 22:55:10 EDT 2024 fail_val=0 fail_loc=0 fail_loc=0x2203 oleg216-server: Fail to freeze barrier for lustre: Object is remote pdsh@oleg216-client: oleg216-server: ssh exited with exit code 66 fail_loc=0 debug_mb=21 debug_mb=21 debug=trace inode super iotrace malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec lfsck hsm snapshot layout debug=trace inode super iotrace malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec lfsck hsm snapshot layout PASS 801a (35s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 801b: modification will be blocked by write barrier ========================================================== 22:55:18 (1713495318) debug=-1 debug_mb=150 debug=-1 debug_mb=150 total: 6 mkdir in 0.03 seconds: 190.96 ops/second File: '/mnt/lustre/d801b.sanity/d5' Size: 11776 Blocks: 23 IO Block: 1048576 directory Device: 2c54f966h/743766374d Inode: 144115473707969827 Links: 2 Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-18 22:55:20.000000000 -0400 Modify: 2024-04-18 22:55:20.000000000 -0400 Change: 2024-04-18 22:55:20.000000000 -0400 Birth: - PID TTY TIME CMD 26999 pts/0 00:00:00 mkdir PID TTY TIME CMD 27000 pts/0 00:00:00 touch PID TTY TIME CMD 27001 pts/0 00:00:00 ln PID TTY TIME CMD 27002 pts/0 00:00:00 mv PID TTY TIME CMD 27003 pts/0 00:00:00 rm debug_mb=21 debug_mb=21 PASS 801b (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 801c: rescan barrier bitmap =============== 22:55:36 (1713495336) SKIP: sanity test_801c needs >= 2 MDTs SKIP 801c (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 802b: be able to set MDTs to readonly ===== 22:55:40 (1713495340) mdt.lustre-MDT0000.readonly=0 mdt.lustre-MDT0000.readonly=1 Modify should be refused touch: cannot touch '/mnt/lustre/d802b.sanity/guard': Read-only file system Read should be allowed mdt.lustre-MDT0000.readonly=0 mdt.lustre-MDT0000.readonly=0 PASS 802b (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 803a: verify agent object for remote object ========================================================== 22:55:48 (1713495348) SKIP: sanity test_803a needs >= 2 MDTs SKIP 803a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 803b: remote object can getattr from cache ========================================================== 22:55:52 (1713495352) SKIP: sanity test_803b needs >= 2 MDTs SKIP 803b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 804: verify agent entry for remote entry == 22:55:56 (1713495356) SKIP: sanity test_804 needs >= 2 MDTs SKIP 804 (1s) debug_raw_pointers=0
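Tests 801a/801b above walk the MDT write barrier through its state machine, and 802b flips an MDT into read-only mode; both are plain lctl operations. A minimal sketch (fsname lustre as in this run; the freeze timeout is illustrative):

  # freeze the write barrier, query its state, then thaw it, as 801a does
  lctl barrier_freeze lustre 30
  lctl barrier_stat lustre
  lctl barrier_thaw lustre
  # 802b: refuse modifications on the MDT while reads keep working
  lctl set_param mdt.lustre-MDT0000.readonly=1
  lctl set_param mdt.lustre-MDT0000.readonly=0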
debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 805: ZFS can remove from full fs ========== 22:56:00 (1713495360) - create 1388 (time 1713495373.45 total 10.01 last 138.72) - create 2759 (time 1713495383.45 total 20.01 last 137.10) mknod(/mnt/lustre/d805.sanity/f-3908) error: Disk quota exceeded total: 3908 create in 28.44 seconds: 137.43 ops/second PASS 805 (63s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 806: Verify Lazy Size on MDS ============== 22:57:04 (1713495424) Test SOM for single-threaded write 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0421731 s, 24.9 MB/s Test SOM for single client multi-threaded(32) write Test SOM for multi-client (1) writes Verify SOM block count Test SOM for truncate PASS 806 (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 807: verify LSOM syncing tool ============= 22:57:15 (1713495435) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl1' Test SOM for single-threaded write with fsync 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0484898 s, 21.6 MB/s Test SOM for multi-client (1) writes oleg216-client.virtnet: executing cancel_lru_locks osc Start to sync 3 records. lustre-MDT0000: clear the changelog for cl1 of all records lustre-MDT0000: Deregistered changelog user #1 lustre-MDT0000: changelog user 'cl1' not found PASS 807 (12s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 808: Check trusted.som xattr not logged in Changelogs ========================================================== 22:57:30 (1713495450) mdd.lustre-MDT0000.changelog_mask=+hsm Registered 1 changelog users: 'cl2' 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0491721 s, 21.3 MB/s lustre-MDT0000: clear the changelog for cl2 of all records lustre-MDT0000: Deregistered changelog user #2 lustre-MDT0000: changelog user 'cl2' not found PASS 808 (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 809: Verify no SOM xattr store for DoM-only files ========================================================== 22:57:38 (1713495458) /mnt/lustre/f809.sanity failed to get som xattr: No data available (61) 1+0 records in 1+0 records out 2048 bytes (2.0 kB) copied, 0.00500823 s, 409 kB/s /mnt/lustre/f809.sanity failed to get som xattr: No data available (61) /mnt/lustre/f809.sanity failed to get som xattr: No data available (61) /mnt/lustre/ failed to get som xattr: No data available (61) PASS 809 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 810: partial page writes on ZFS (LU-11663) ========================================================== 22:57:43 (1713495463) osc.lustre-OST0000-osc-ffff88012c001800.checksum_type=crc32 osc.lustre-OST0001-osc-ffff88012c001800.checksum_type=crc32 fail_loc=0x411 2+0 records in 2+0 records out 20480 bytes (20 kB) copied, 0.0574544 s, 356 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 20000 bytes (20 kB) copied, 0.0495054 s, 404 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 8000 bytes (8.0 kB) copied, 0.0546661 s, 146 kB/s 
ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 1000 bytes (1.0 kB) copied, 0.0643061 s, 15.6 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear osc.lustre-OST0000-osc-ffff88012c001800.checksum_type=adler osc.lustre-OST0001-osc-ffff88012c001800.checksum_type=adler fail_loc=0x411 2+0 records in 2+0 records out 20480 bytes (20 kB) copied, 0.055012 s, 372 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 20000 bytes (20 kB) copied, 0.0518744 s, 386 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 8000 bytes (8.0 kB) copied, 0.0486356 s, 164 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 1000 bytes (1.0 kB) copied, 0.0519832 s, 19.2 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear osc.lustre-OST0000-osc-ffff88012c001800.checksum_type=crc32c osc.lustre-OST0001-osc-ffff88012c001800.checksum_type=crc32c fail_loc=0x411 2+0 records in 2+0 records out 20480 bytes (20 kB) copied, 0.048864 s, 419 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 20000 bytes (20 kB) copied, 0.0477575 s, 419 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 8000 bytes (8.0 kB) copied, 0.0532682 s, 150 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 1000 bytes (1.0 kB) copied, 0.0255032 s, 39.2 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear osc.lustre-OST0000-osc-ffff88012c001800.checksum_type=t10ip512 osc.lustre-OST0001-osc-ffff88012c001800.checksum_type=t10ip512 fail_loc=0x411 2+0 records in 2+0 records out 20480 bytes (20 kB) copied, 0.0323399 s, 633 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 20000 bytes (20 kB) copied, 0.0336401 s, 595 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 8000 bytes (8.0 kB) copied, 0.0273771 s, 292 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 1000 bytes (1.0 kB) copied, 0.0249022 s, 40.2 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear osc.lustre-OST0000-osc-ffff88012c001800.checksum_type=t10ip4K osc.lustre-OST0001-osc-ffff88012c001800.checksum_type=t10ip4K fail_loc=0x411 2+0 records in 2+0 records out 20480 bytes (20 kB) copied, 0.0295725 s, 
693 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 20000 bytes (20 kB) copied, 0.0264242 s, 757 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 8000 bytes (8.0 kB) copied, 0.0262122 s, 305 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 1000 bytes (1.0 kB) copied, 0.0302794 s, 33.0 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear osc.lustre-OST0000-osc-ffff88012c001800.checksum_type=t10crc512 osc.lustre-OST0001-osc-ffff88012c001800.checksum_type=t10crc512 fail_loc=0x411 2+0 records in 2+0 records out 20480 bytes (20 kB) copied, 0.0265675 s, 771 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 20000 bytes (20 kB) copied, 0.0271701 s, 736 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 8000 bytes (8.0 kB) copied, 0.0302638 s, 264 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 1000 bytes (1.0 kB) copied, 0.0312998 s, 31.9 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear osc.lustre-OST0000-osc-ffff88012c001800.checksum_type=t10crc4K osc.lustre-OST0001-osc-ffff88012c001800.checksum_type=t10crc4K fail_loc=0x411 2+0 records in 2+0 records out 20480 bytes (20 kB) copied, 0.0263291 s, 778 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 20000 bytes (20 kB) copied, 0.0325808 s, 614 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 8000 bytes (8.0 kB) copied, 0.0285364 s, 280 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear 2+0 records in 2+0 records out 1000 bytes (1.0 kB) copied, 0.0273 s, 36.6 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=clear set checksum type to crc32c, rc = 0 PASS 810 (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 812a: do not drop reqs generated when imp is going to idle (LU-11951) ========================================================== 22:57:51 (1713495471) osc.lustre-OST0000-osc-ffff88012c001800.idle_timeout=10 osc.lustre-OST0001-osc-ffff88012c001800.idle_timeout=10 oleg216-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid in FULL state after 0 sec fail_loc=0x245 fail_val=8 oleg216-client.virtnet: executing wait_import_state CONNECTING osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid 50 
osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid in CONNECTING state after 9 sec fail_loc=0 fail_val=0 osc.lustre-OST0000-osc-ffff88012c001800.idle_timeout=20 osc.lustre-OST0001-osc-ffff88012c001800.idle_timeout=20 PASS 812a (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 812b: do not drop no-resend request for idle connect ========================================================== 22:58:09 (1713495489) osc.lustre-OST0000-osc-ffff88012c001800.idle_timeout=10 osc.lustre-OST0001-osc-ffff88012c001800.idle_timeout=10 oleg216-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid in FULL state after 0 sec fail_loc=0x245 fail_val=8 oleg216-client.virtnet: executing wait_import_state CONNECTING osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid in CONNECTING state after 11 sec fail_loc=0 fail_val=0 Disk quotas for usr 0 (uid 0): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre/ 175588 0 0 - 4861 0 0 - oleg216-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid in IDLE state after 13 sec osc.lustre-OST0000-osc-ffff88012c001800.idle_timeout=20 osc.lustre-OST0001-osc-ffff88012c001800.idle_timeout=20 PASS 812b (32s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 812c: idle import vs lock enqueue race ==== 22:58:43 (1713495523) /mnt/lustre/f812c.sanity lmm_stripe_count: 1 lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 0 obdidx objid objid group 0 4024 0xfb8 0x2400013a0 osc.lustre-OST0000-osc-ffff88012c001800.idle_timeout=10 osc.lustre-OST0001-osc-ffff88012c001800.idle_timeout=10 oleg216-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid in FULL state after 0 sec fail_loc=0x80000533 1+0 records in 1+0 records out 512 bytes (512 B) copied, 0.509279 s, 1.0 kB/s osc.lustre-OST0000-osc-ffff88012c001800.idle_timeout=20 osc.lustre-OST0001-osc-ffff88012c001800.idle_timeout=20 PASS 812c (21s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y
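Test 813 below samples per-file heat counters across the 60-second decay period. The counters are read with lfs heat_get and toggled per file with lfs heat_set; a minimal sketch (the path is illustrative, and the client-wide switch is assumed to be the llite file_heat tunable):

  # enable file-heat accounting on the client (assumed parameter name)
  lctl set_param llite.*.file_heat=1
  # read a file's heat; sample and byte counts decay every period
  lfs heat_get /mnt/lustre/somefile
  # turn accounting off and back on for a single file, as the test does
  lfs heat_set --off /mnt/lustre/somefile
  lfs heat_set --on /mnt/lustre/somefile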
== sanity test 813: File heat verification ================ 22:59:06 (1713495546) Turn on file heat Period second: 60, Decay percentage: 80 flags: 0 readsample: 3 writesample: 2 readbyte: 16 writebyte: 12 Sleep 63 seconds... flags: 0 readsample: 3 writesample: 2 readbyte: 16 writebyte: 12 Sleep 63 seconds... flags: 0 readsample: 3 writesample: 2 readbyte: 19 writebyte: 14 Turn off file heat for the file /mnt/lustre/f813.sanity flags: 2 readsample: 0 writesample: 0 readbyte: 0 writebyte: 0 Turn on file heat for the file /mnt/lustre/f813.sanity flags: 0 readsample: 3 writesample: 2 readbyte: 16 writebyte: 12 Turn off file heat support for the Lustre filesystem flags: 0 readsample: 0 writesample: 0 readbyte: 0 writebyte: 0 PASS 813 (130s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 814: sparse cp works as expected (LU-12361) ========================================================== 23:01:18 (1713495678) 0+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00357547 s, 0.0 kB/s PASS 814 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 815: zero byte tiny write doesn't hang (LU-12382) ========================================================== 23:01:23 (1713495683) PASS 815 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 816: do not reset lru_resize on idle reconnect ========================================================== 23:01:29 (1713495689) osc.lustre-OST0000-osc-ffff88012c001800.idle_timeout=10 osc.lustre-OST0001-osc-ffff88012c001800.idle_timeout=10 oleg216-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid in FULL state after 0 sec ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=400 ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=400 oleg216-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid in IDLE state after 10 sec 0+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00332988 s, 0.0 kB/s osc.lustre-OST0000-osc-ffff88012c001800.idle_timeout=20 osc.lustre-OST0001-osc-ffff88012c001800.idle_timeout=20 PASS 816 (16s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 817: nfsd won't cache write lock for exec file ========================================================== 23:01:47 (1713495707) PASS 817 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 818: unlink with failed llog ============== 23:01:53 (1713495713) lfs setstripe: setstripe error for '/mnt/lustre/d818.sanity/f818.sanity': stripe already set Stopping /mnt/lustre-mds1 (opts:) on oleg216-server fail_loc=0x80002105 Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg216-server: oleg216-server.virtnet: executing set_default_debug all all pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 Started lustre-MDT0000 [10028.155814] LustreError: 4349:0:(osp_sync.c:335:osp_sync_declare_add()) logging isn't available, run LFSCK Failing mds1 on oleg216-server Stopping /mnt/lustre-mds1 (opts:) on oleg216-server 23:02:01 (1713495721) shut down Failover mds1 to oleg216-server mount facets: mds1 Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1 oleg216-server: oleg216-server.virtnet: executing set_default_debug all all pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1 Started lustre-MDT0000 23:02:15 (1713495735) targets are mounted 23:02:15 (1713495735) facet_failover done oleg216-client.virtnet: executing wait_import_state_mount (FULL|IDLE)
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 814: sparse cp works as expected (LU-12361) ========================================================== 23:01:18 (1713495678)
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00357547 s, 0.0 kB/s
PASS 814 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 815: zero byte tiny write doesn't hang (LU-12382) ========================================================== 23:01:23 (1713495683)
PASS 815 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 816: do not reset lru_resize on idle reconnect ========================================================== 23:01:29 (1713495689)
osc.lustre-OST0000-osc-ffff88012c001800.idle_timeout=10
osc.lustre-OST0001-osc-ffff88012c001800.idle_timeout=10
oleg216-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid in FULL state after 0 sec
ldlm.namespaces.lustre-OST0000-osc-ffff88012c001800.lru_size=400
ldlm.namespaces.lustre-OST0001-osc-ffff88012c001800.lru_size=400
oleg216-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff88012c001800.ost_server_uuid in IDLE state after 10 sec
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00332988 s, 0.0 kB/s
osc.lustre-OST0000-osc-ffff88012c001800.idle_timeout=20
osc.lustre-OST0001-osc-ffff88012c001800.idle_timeout=20
PASS 816 (16s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 817: nfsd won't cache write lock for exec file ========================================================== 23:01:47 (1713495707)
PASS 817 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 818: unlink with failed llog ============== 23:01:53 (1713495713)
lfs setstripe: setstripe error for '/mnt/lustre/d818.sanity/f818.sanity': stripe already set
Stopping /mnt/lustre-mds1 (opts:) on oleg216-server
fail_loc=0x80002105
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg216-server: oleg216-server.virtnet: executing set_default_debug all all
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
Started lustre-MDT0000
[10028.155814] LustreError: 4349:0:(osp_sync.c:335:osp_sync_declare_add()) logging isn't available, run LFSCK
Failing mds1 on oleg216-server
Stopping /mnt/lustre-mds1 (opts:) on oleg216-server
23:02:01 (1713495721) shut down
Failover mds1 to oleg216-server
mount facets: mds1
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg216-server: oleg216-server.virtnet: executing set_default_debug all all
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
Started lustre-MDT0000
23:02:15 (1713495735) targets are mounted
23:02:15 (1713495735) facet_failover done
oleg216-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
PASS 818 (26s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 819a: too big niobuf in read ============== 23:02:21 (1713495741)
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0308766 s, 34.0 MB/s
fail_loc=0x80000248
dd: error reading '/mnt/lustre/f819a.sanity': Value too large for defined data type
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.15473 s, 0.0 kB/s
PASS 819a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 819b: too big niobuf in write ============= 23:02:25 (1713495745)
fail_loc=0x80000248
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0297038 s, 35.3 MB/s
PASS 819b (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 820: update max EA from open intent ======= 23:02:31 (1713495751)
SKIP: sanity test_820 needs >= 2 MDTs
SKIP 820 (0s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 823: Setting create_count > OST_MAX_PRECREATE is lowered to maximum ========================================================== 23:02:33 (1713495753)
setting create_count to 100200:
-result- count: 9984 with max: 20000, expecting: 9984
PASS 823 (5s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 831: throttling unlink/setattr queuing on OSP ========================================================== 23:02:40 (1713495760)
total: 1000 open/close in 1.89 seconds: 530.24 ops/second
- unlinked 0 (time 1713495765 ; total 0 ; last 0)
total: 1000 unlinks in 73 seconds: 13.698630 unlinks/second
PASS 831 (81s)
debug_raw_pointers=0
debug_raw_pointers=0
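Test 823 asks the MDS to precreate far more objects than OST_MAX_PRECREATE allows and checks that the request is clamped (here to 9984, against a hard ceiling of 20000). A hedged sketch of the knob involved; the device naming below is an assumption for this setup, and the parameter prefix (osp vs. osc) differs across Lustre versions:

    # on the MDS node: request an oversized precreate batch
    lctl set_param osp.lustre-OST0000-osc-MDT0000.create_count=100200
    # read back the clamped batch size and its ceiling
    lctl get_param osp.lustre-OST0000-osc-MDT0000.create_count
    lctl get_param osp.lustre-OST0000-osc-MDT0000.max_create_count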
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 832: lfs rm_entry ========================= 23:04:03 (1713495843)
SKIP: sanity test_832 needs >= 2 MDTs
SKIP 832 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 833: Mixed buffered/direct read and write should not return -EIO ========================================================== 23:04:05 (1713495845)
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 1.28952 s, 40.7 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 0.022896 s, 2.3 GB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 0.984565 s, 53.3 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 2.41052 s, 21.7 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 1.20745 s, 43.4 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 3.71341 s, 14.1 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 0.839096 s, 62.5 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 2.37142 s, 22.1 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 1.1251 s, 46.6 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 2.35804 s, 22.2 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 1.799 s, 29.1 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 3.03139 s, 17.3 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 3.63827 s, 14.4 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 5.90778 s, 8.9 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 4.40036 s, 11.9 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 1.15198 s, 45.5 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 1.20686 s, 43.4 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 1.77169 s, 29.6 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 3.83369 s, 13.7 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 5.54157 s, 9.5 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 3.68063 s, 14.2 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 3.19477 s, 16.4 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 2.99554 s, 17.5 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 2.99346 s, 17.5 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 0.635015 s, 82.6 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 1.30313 s, 40.2 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 2.65837 s, 19.7 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 5.82057 s, 9.0 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 8.74005 s, 6.0 MB/s
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 5.90281 s, 8.9 MB/s
PASS 833 (34s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity test_842 skipping SLOW test 842
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 850: lljobstat can parse living and aggregated job_stats ========================================================== 23:04:42 (1713495882)
error: list_param: param_path '*/*/job_stats': No such file or directory
error: list_param: listing '*/*/job_stats': No such file or directory
---
timestamp: 1713495883
top_jobs:
...
error: get_param: param_path '*/*/job_stats': No such file or directory
---
timestamp: 1713495883
top_jobs:
...
PASS 850 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
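Test 850 checks that lljobstat can parse both live and aggregated job_stats; the "No such file or directory" errors are tolerated here, apparently because no job_stats parameter was visible on the node at that moment, and the tool still emits an empty top_jobs document. For reference, a sketch of the server-side counters the tool aggregates, assuming jobstats are enabled:

    # choose how client RPCs are tagged with a job ID (one common choice)
    lctl set_param jobid_var=procname_uid
    # raw per-job statistics kept by the servers
    lctl get_param mdt.*.job_stats
    lctl get_param obdfilter.*.job_stats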
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 851: fanotify can monitor open/read/write/close events for lustre fs ========================================================== 23:04:46 (1713495886)
localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
open:/mnt/lustre/d851.sanity/f_test_851_6927:22356:bash
write&close:/mnt/lustre/d851.sanity/f_test_851_6927:22356:bash
write&close:/mnt/lustre/d851.sanity/f_test_851_6927:22356:bash
1234567890
open:/mnt/lustre/d851.sanity/f_test_851_6927:22587:cat
read:/mnt/lustre/d851.sanity/f_test_851_6927:22587:cat
close:/mnt/lustre/d851.sanity/f_test_851_6927:22587:
PASS 851 (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 900: umount should not race with any mgc requeue thread ========================================================== 23:04:51 (1713495891)
fail_loc=0x903
cln..Failing mds1 on oleg216-server
Stopping /mnt/lustre-mds1 (opts:) on oleg216-server
23:04:53 (1713495893) shut down
Failover mds1 to oleg216-server
mount facets: mds1
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg216-server: oleg216-server.virtnet: executing set_default_debug all all
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
Started lustre-MDT0000
23:05:06 (1713495906) targets are mounted
23:05:06 (1713495906) facet_failover done
oleg216-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
Stopping clients: oleg216-client.virtnet /mnt/lustre (opts:)
Stopping client oleg216-client.virtnet /mnt/lustre opts:
Stopping clients: oleg216-client.virtnet /mnt/lustre2 (opts:)
Stopping /mnt/lustre-mds1 (opts:-f) on oleg216-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg216-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg216-server
unloading modules on: 'oleg216-server'
oleg216-server: oleg216-server.virtnet: executing unload_modules_local
modules unloaded.
mnt..Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
../libcfs/libcfs/libcfs options: 'cpu_npartitions=2'
ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
quota/lquota options: 'hash_lqs_cur_bits=3'
loading modules on: 'oleg216-server'
oleg216-server: oleg216-server.virtnet: executing load_modules_local
oleg216-server: Loading modules from /home/green/git/lustre-release/lustre
oleg216-server: detected 4 online CPUs by sysfs
oleg216-server: Force libcfs to create 2 CPU partitions
oleg216-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1'
oleg216-server: quota/lquota options: 'hash_lqs_cur_bits=3'
Checking servers environments
Checking clients oleg216-client.virtnet environments
Loading modules from /home/green/git/lustre-release/lustre
detected 4 online CPUs by sysfs
Force libcfs to create 2 CPU partitions
loading modules on: 'oleg216-server'
oleg216-server: oleg216-server.virtnet: executing load_modules_local
oleg216-server: Loading modules from /home/green/git/lustre-release/lustre
oleg216-server: detected 4 online CPUs by sysfs
oleg216-server: Force libcfs to create 2 CPU partitions
Setup mgs, mdt, osts
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg216-server: oleg216-server.virtnet: executing set_default_debug all all
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg216-server: oleg216-server.virtnet: executing set_default_debug all all
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
Started lustre-OST0000
Starting ost2: -o localrecov lustre-ost2/ost2 /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg216-server: oleg216-server.virtnet: executing set_default_debug all all
pdsh@oleg216-client: oleg216-server: ssh exited with exit code 1
Started lustre-OST0001
Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre
Starting client oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre
Started clients oleg216-client.virtnet: 192.168.202.116@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project)
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff88012a676000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff88012a676000.idle_timeout=debug
disable quota as required
done
PASS 900 (144s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 901: don't leak a mgc lock on client umount ========================================================== 23:07:17 (1713496037)
192.168.202.116@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg216-client.virtnet /mnt/lustre (opts:)
Starting client: oleg216-client.virtnet: -o user_xattr,flock oleg216-server@tcp:/lustre /mnt/lustre
PASS 901 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 902: test short write doesn't hang lustre ========================================================== 23:07:22 (1713496042)
fail_loc=0x1415
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0989916 s, 10.6 MB/s
PASS 902 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 903: Test long page discard does not cause evictions ========================================================== 23:07:25 (1713496045)
6+0 records in
6+0 records out
6291456 bytes (6.3 MB) copied, 0.191998 s, 32.8 MB/s
fail_loc=0x417
fail_val=20
Waiting for MDT destroys to complete
Waiting 0s for local destroys to complete
Waiting 1s for local destroys to complete
Waiting 2s for local destroys to complete
Waiting 3s for local destroys to complete
Waiting 4s for local destroys to complete
Waiting 5s for local destroys to complete
Waiting 6s for local destroys to complete
Waiting 7s for local destroys to complete
Waiting 8s for local destroys to complete
Waiting 9s for local destroys to complete
Waiting 10s for local destroys to complete
Waiting 11s for local destroys to complete
Waiting 12s for local destroys to complete
Waiting 13s for local destroys to complete
Waiting 14s for local destroys to complete
Waiting 15s for local destroys to complete
Waiting 16s for local destroys to complete
Waiting 17s for local destroys to complete
Waiting 18s for local destroys to complete
Waiting 19s for local destroys to complete
Waiting 20s for local destroys to complete
Waiting 21s for local destroys to complete
Waiting 22s for local destroys to complete
Waiting 23s for local destroys to complete
Waiting 24s for local destroys to complete
Waiting 25s for local destroys to complete
Waiting 26s for local destroys to complete
Waiting 27s for local destroys to complete
Waiting 28s for local destroys to complete
Waiting 29s for local destroys to complete
Waiting 30s for local destroys to complete
Waiting 31s for local destroys to complete
Waiting 32s for local destroys to complete
Waiting 33s for local destroys to complete
Waiting 34s for local destroys to complete
Waiting 35s for local destroys to complete
Waiting 36s for local destroys to complete
Waiting 37s for local destroys to complete
Waiting 38s for local destroys to complete
Waiting 39s for local destroys to complete
Waiting 40s for local destroys to complete
Waiting 41s for local destroys to complete
Waiting 42s for local destroys to complete
Waiting 43s for local destroys to complete
Waiting 44s for local destroys to complete
Waiting 45s for local destroys to complete
Waiting 46s for local destroys to complete
Waiting 47s for local destroys to complete
Waiting 48s for local destroys to complete
Waiting 49s for local destroys to complete
Waiting 50s for local destroys to complete
Waiting 51s for local destroys to complete
Waiting 52s for local destroys to complete
Waiting 53s for local destroys to complete
Waiting 54s for local destroys to complete
Waiting 55s for local destroys to complete
Waiting 56s for local destroys to complete
Waiting 57s for local destroys to complete
Waiting 58s for local destroys to complete
Waiting 59s for local destroys to complete
Waiting 60s for local destroys to complete
Waiting 61s for local destroys to complete
Waiting 62s for local destroys to complete
Waiting 63s for local destroys to complete
Waiting 64s for local destroys to complete
Waiting 65s for local destroys to complete
Waiting 66s for local destroys to complete
Waiting 67s for local destroys to complete
Waiting 68s for local destroys to complete
Waiting 69s for local destroys to complete
Waiting 70s for local destroys to complete
Waiting 71s for local destroys to complete
Waiting 72s for local destroys to complete
Waiting 73s for local destroys to complete
Waiting 74s for local destroys to complete
Waiting 75s for local destroys to complete
Waiting 76s for local destroys to complete
Waiting 77s for local destroys to complete
Waiting 78s for local destroys to complete
Waiting 79s for local destroys to complete
Waiting 80s for local destroys to complete
Waiting 81s for local destroys to complete
Waiting 82s for local destroys to complete
Waiting 83s for local destroys to complete
Waiting 84s for local destroys to complete
Waiting 85s for local destroys to complete
Waiting 86s for local destroys to complete
Waiting 87s for local destroys to complete
Waiting 88s for local destroys to complete
Waiting 89s for local destroys to complete
Waiting 90s for local destroys to complete
Waiting 91s for local destroys to complete
Waiting 92s for local destroys to complete
Waiting 93s for local destroys to complete
Waiting 94s for local destroys to complete
Waiting 95s for local destroys to complete
Waiting 96s for local destroys to complete
Waiting 97s for local destroys to complete
Waiting 98s for local destroys to complete
PASS 903 (138s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 904: virtual project ID xattr ============= 23:09:46 (1713496186)
SKIP: sanity test_904 ldiskfs only test
SKIP 904 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 905: bad or new opcode should not get client stuck ========================================================== 23:09:50 (1713496190)
fail_val=21
fail_loc=0x0253
lfs ladvise: cannot give advice: Operation not supported (95)
ladvise: cannot give advice 'willread' to file '/mnt/lustre/f905.sanity': Operation not supported
PASS 905 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
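Test 905 injects an unknown ladvise opcode (fail_loc=0x0253) and expects the client to fail the request cleanly with "Operation not supported" rather than hang, which is what the two error lines show. For comparison, a normal invocation is sketched below; advice types and range flags are per lfs-ladvise(1), and the file name is just the one this test used:

    # hint that the first megabyte of the file will be read soon
    lfs ladvise -a willread -s 0 -e 1M /mnt/lustre/f905.sanity
    # drop cached pages for the same range
    lfs ladvise -a dontneed -s 0 -e 1M /mnt/lustre/f905.sanity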
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 906: Simple test for io_uring I/O engine via fio ========================================================== 23:09:55 (1713496195)
SKIP: sanity test_906 Client OS does not support io_uring I/O engine
SKIP 906 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 907: write rpc error during unlink ======== 23:09:59 (1713496199)
/mnt/lustre/f907.sanity
lmm_stripe_count:  2
lmm_stripe_size:   1048576
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 0
	obdidx		 objid		 objid		 group
	     0	         14182	      0x3766	   0x2400013a0
	     1	          2403	       0x963	   0x280000bd0
fail_val=3
fail_loc=0x80000216
17+0 records in
17+0 records out
1114112 bytes (1.1 MB) copied, 0.0636473 s, 17.5 MB/s
PASS 907 (4s)
debug_raw_pointers=0
debug_raw_pointers=0
== sanity test complete, duration 10428 sec ============== 23:10:04 (1713496204)
=== sanity: start cleanup 23:10:04 (1713496204) ===
=== sanity: finish cleanup 23:10:52 (1713496252) ===
debug=super ioctl neterror warning dlmtrace error emerg ha rpctrace vfstrace config console lfsck
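The closing debug= line restores the client's normal debug mask after the repeated raw-pointer toggling throughout the run. The mask can be inspected and adjusted incrementally with lctl, e.g. this sketch (flag names as in the line above):

    # show the active kernel debug flags
    lctl get_param debug
    # add or remove a single flag without rewriting the whole mask
    lctl set_param debug=+rpctrace
    lctl set_param debug=-rpctrace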