-----============= acceptance-small: sanity ============----- Wed Apr 17 20:55:06 EDT 2024
excepting tests: 225 255 256 400a 42a 42c 42b 118c 118d 407 411b
skipping tests SLOW=no: 27m 60i 64b 68 71 135 136 230d 300o 842
=== sanity: start setup 20:55:10 (1713401710) ===
oleg146-client.virtnet: executing check_config_client /mnt/lustre
oleg146-client.virtnet: Checking config lustre mounted on /mnt/lustre
Checking servers environments
Checking clients oleg146-client.virtnet environments
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff88012b734000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff88012b734000.idle_timeout=debug
disable quota as required
oleg146-server: oleg146-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
osd-ldiskfs.track_declares_assert=1
=== sanity: finish setup 20:55:17 (1713401717) ===
running as uid/gid/euid/egid 500/500/500/500, groups: [true]
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d0_runas_test/f7509]
preparing for tests involving mounts
mke2fs 1.46.2.wc5 (26-Mar-2022)
debug=all
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60a: llog_test run from kernel module and test llog_reader ========================================================== 20:55:19 (1713401719)
SKIP: sanity test_60a missing subtest run-llog.sh
SKIP 60a (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60b: limit repeated messages from CERROR/CWARN ========================================================== 20:55:22 (1713401722)
PASS 60b (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60c: unlink file when mds full ============ 20:55:25 (1713401725)
create 5000 files
 - open/close 3230 (time 1713401736.39 total 10.00 last 322.96)
total: 5000 open/close in 17.20 seconds: 290.65 ops/second
fail_loc=0x80000137
 - unlinked 0 (time 1713401744 ; total 0 ; last 0)
total: 5000 unlinks in 11 seconds: 454.545441 unlinks/second
fail_loc=0
PASS 60c (32s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60d: test printk console message masking == 20:55:58 (1713401758)
printk=0 emerg
PASS 60d (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60e: no space while new llog is being created ========================================================== 20:56:01 (1713401761)
fail_loc=0x15b
PASS 60e (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60f: change debug_path works ============== 20:56:06 (1713401766)
debug_path=/tmp/f60f.sanity
fail_loc=0x8000050e
striped dir -i0 -c2 -H all_char /mnt/lustre/d60f.sanity
ls: cannot access /tmp/f60f.sanity*: No such file or directory
0 /tmp/f60f.sanity.1713401766.13809
debug_path=/tmp/lustre-log
PASS 60f (2s)
debug_raw_pointers=0
debug_raw_pointers=0
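Tests 60c, 60e and 60f above drive error paths through the fail_loc fault-injection hook rather than real failures. A minimal sketch of the pattern, assuming root on a test node with lctl available (the specific hex codes are per-test constants defined in the Lustre source):

# arm a fault point; OR-ing in 0x80000000 makes it fire only once
lctl set_param fail_loc=0x80000137
# ... run the workload expected to hit the failure path ...
# disarm the fault point again
lctl set_param fail_loc=0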
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60g: transaction abort won't cause MDT hung ========================================================== 20:56:09 (1713401769)
striped dir -i0 -c2 -H all_char /mnt/lustre/d60g.sanity
/home/green/git/lustre-release/lustre/tests/test-framework.sh: line 4731: 14744 Killed ( local index=0; while true; do $LFS setdirstripe -i $(($index % $MDSCOUNT)) -c $MDSCOUNT $DIR/$tdir/subdir$index 2> /dev/null; mkdir $DIR/$tdir/subdir$index 2> /dev/null; rmdir $DIR/$tdir/subdir$index 2> /dev/null; index=$((index + 1)); done )
Started LFSCK on the device lustre-MDT0000: scrub namespace
/mnt/lustre/d60g.sanity:
subdir132 subdir151 subdir16 subdir171 subdir427 subdir533 subdir565 subdir591 subdir607 subdir633 subdir73
/mnt/lustre/d60g.sanity/subdir132:
/mnt/lustre/d60g.sanity/subdir151:
/mnt/lustre/d60g.sanity/subdir16:
/mnt/lustre/d60g.sanity/subdir171:
ls: closing directory /mnt/lustre/d60g.sanity/subdir171: No such file or directory
/mnt/lustre/d60g.sanity/subdir427:
ls: closing directory /mnt/lustre/d60g.sanity/subdir427: No such file or directory
/mnt/lustre/d60g.sanity/subdir533:
/mnt/lustre/d60g.sanity/subdir565:
/mnt/lustre/d60g.sanity/subdir591:
/mnt/lustre/d60g.sanity/subdir607:
/mnt/lustre/d60g.sanity/subdir633:
/mnt/lustre/d60g.sanity/subdir73:
PASS 60g (30s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60h: striped directory with missing stripes can be accessed ========================================================== 20:56:41 (1713401801)
fail_loc=0x80000188
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9488: /mnt/lustre/d60h.sanity-0x80000188/2: No such device
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9488: /mnt/lustre/d60h.sanity-0x80000188/3: No such device
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9488: /mnt/lustre/d60h.sanity-0x80000188/4: No such device
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9488: /mnt/lustre/d60h.sanity-0x80000188/7: No such device
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9488: /mnt/lustre/d60h.sanity-0x80000188/8: No such device
lmv_stripe_count: 2
lmv_stripe_offset: 0
lmv_hash_type: crush
mdtidx FID[seq:oid:ver]
0 [0x200000400:0x14a:0x0]
0 [0:0x0:0x0]
lmv_stripe_count: 3
lmv_stripe_offset: 1
lmv_hash_type: crush,migrating,fixed
mdtidx FID[seq:oid:ver]
1 [0x240000400:0x147:0x0]
0 [0x200000400:0x14a:0x0]
0 [0:0x0:0x0]
/mnt/lustre/d60h.sanity-0x80000188 ~
~
fail_loc=0x80000189
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9488: /mnt/lustre/d60h.sanity-0x80000189/2: No such device
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9488: /mnt/lustre/d60h.sanity-0x80000189/3: No such device
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9488: /mnt/lustre/d60h.sanity-0x80000189/4: No such device
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9488: /mnt/lustre/d60h.sanity-0x80000189/7: No such device
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9488: /mnt/lustre/d60h.sanity-0x80000189/8: No such device
lmv_stripe_count: 2
lmv_stripe_offset: 0
lmv_hash_type: crush
mdtidx FID[seq:oid:ver]
0 [0x200000400:0x14b:0x0]
0 [0:0x0:0x0]
lmv_stripe_count: 3
lmv_stripe_offset: 1
lmv_hash_type: crush,migrating,fixed
mdtidx FID[seq:oid:ver]
1 [0x240000400:0x148:0x0]
0 [0x200000400:0x14b:0x0]
0 [0:0x0:0x0]
/mnt/lustre/d60h.sanity-0x80000189 ~
~
PASS 60h (3s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity test_60i skipping SLOW test 60i
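Test 60j below consumes the MDT changelog, so it first registers a changelog user on each MDT and deregisters it on exit. A sketch of that bracketing, assuming server-side lctl access and the device names from this log:

# register a consumer; lctl prints an id such as cl1
lctl --device lustre-MDT0000 changelog_register
# record all event types, as the log shows
lctl set_param mdd.lustre-MDT0000.changelog_mask=ALL
# on teardown: drop consumed records, then drop the user
lfs changelog_clear lustre-MDT0000 cl1 0
lctl --device lustre-MDT0000 changelog_deregister cl1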
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 60j: llog_reader reports corruptions ====== 20:56:47 (1713401807)
mdd.lustre-MDT0000.changelog_mask=+hsm
mdd.lustre-MDT0001.changelog_mask=+hsm
Registered 2 changelog users: 'cl1 cl1'
mdd.lustre-MDT0000.changelog_mask=ALL
mdd.lustre-MDT0001.changelog_mask=ALL
lustre-MDT0000: clear the changelog for cl1 of all records
lustre-MDT0001: clear the changelog for cl1 of all records
total: 100 open/close in 0.54 seconds: 185.51 ops/second
 - unlinked 0 (time 1713401811 ; total 0 ; last 0)
total: 100 unlinks in 0 seconds: inf unlinks/second
oleg146-server: debugfs 1.46.2.wc5 (26-Mar-2022)
oleg146-server: /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps
SKIP: sanity test_60j path oi.1/0x1:0xc:0x0 is not in 'O/1/d/' format
lustre-MDT0001: clear the changelog for cl1 of all records
lustre-MDT0001: Deregistered changelog user #1
lustre-MDT0000: clear the changelog for cl1 of all records
lustre-MDT0000: Deregistered changelog user #1
SKIP 60j (8s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 61a: mmap() writes don't make sync hang ========================================================================== 20:56:57 (1713401817)
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00619813 s, 661 kB/s
PASS 61a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 61b: mmap() of unstriped file is successful ========================================================== 20:57:02 (1713401822)
PASS 61b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 63a: Verify oig_wait interruption does not crash ================================================================= 20:57:05 (1713401825)
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9622: 21573 Terminated dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9622: 21582 Terminated dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9622: 21590 Terminated dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9622: 21598 Terminated dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9622: 21605 Terminated dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9622: 21613 Terminated dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9622: 21621 Terminated dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9622: 21629 Terminated dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9622: 21637 Terminated dd if=/dev/zero of=$DIR/f63 bs=8k
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9622: 21644 Terminated dd if=/dev/zero of=$DIR/f63 bs=8k
checking grant......
UUID 1K-blocks Used Available Use% Mounted on
lustre-MDT0000_UUID 1414116 4612 1283076 1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID 1414116 3720 1283968 1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID 3833116 58872 3547924 2% /mnt/lustre[OST:0]
lustre-OST0001_UUID 3833116 1564 3605456 1% /mnt/lustre[OST:1]
filesystem_summary: 7666232 60436 7153380 1% /mnt/lustre
pass grant check: client:574197760 server:574197760
PASS 63a (63s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 63b: async write errors should be returned to fsync ============================================================= 20:58:10 (1713401890)
debug=-1
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00643648 s, 636 kB/s
fail_loc=0x80000406
fsync: Input/output error
debug=trace inode super iotrace malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec lfsck hsm snapshot layout
debug=super ioctl neterror warning dlmtrace error emerg ha rpctrace vfstrace config console lfsck
checking grant......
UUID 1K-blocks Used Available Use% Mounted on
lustre-MDT0000_UUID 1414116 4612 1283076 1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID 1414116 3720 1283968 1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID 3833116 1528 3605268 1% /mnt/lustre[OST:0]
lustre-OST0001_UUID 3833116 1564 3605456 1% /mnt/lustre[OST:1]
filesystem_summary: 7666232 3092 7210724 1% /mnt/lustre
pass grant check: client:574197760 server:574197760
PASS 63b (6s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 64a: verify filter grant calculations (in kernel) =============================================================== 20:58:18 (1713401898)
UUID 1K-blocks Used Available Use% Mounted on
lustre-MDT0000_UUID 1414116 4612 1283076 1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID 1414116 3720 1283968 1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID 3833116 1528 3605268 1% /mnt/lustre[OST:0]
lustre-OST0001_UUID 3833116 1564 3605456 1% /mnt/lustre[OST:1]
filesystem_summary: 7666232 3092 7210724 1% /mnt/lustre
osc.lustre-OST0000-osc-ffff88012b734000.cur_lost_grant_bytes=3760128
osc.lustre-OST0001-osc-ffff88012b734000.cur_lost_grant_bytes=28672
osc.lustre-OST0000-osc-ffff88012b734000.cur_grant_bytes=486350848
osc.lustre-OST0001-osc-ffff88012b734000.cur_grant_bytes=84058112
checking grant......
UUID 1K-blocks Used Available Use% Mounted on
lustre-MDT0000_UUID 1414116 4612 1283076 1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID 1414116 3720 1283968 1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID 3833116 1528 3605268 1% /mnt/lustre[OST:0]
lustre-OST0001_UUID 3833116 1564 3605456 1% /mnt/lustre[OST:1]
filesystem_summary: 7666232 3092 7210724 1% /mnt/lustre
pass grant check: client:574197760 server:574197760
PASS 64a (3s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity test_64b skipping SLOW test 64b
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 64c: verify grant shrink ================== 20:58:23 (1713401903)
osc.lustre-OST0000-osc-ffff88012b734000.cur_grant_bytes=0
checking grant......
UUID 1K-blocks Used Available Use% Mounted on
lustre-MDT0000_UUID 1414116 4612 1283076 1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID 1414116 3720 1283968 1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID 3833116 1528 3605492 1% /mnt/lustre[OST:0]
lustre-OST0001_UUID 3833116 1564 3605456 1% /mnt/lustre[OST:1]
filesystem_summary: 7666232 3092 7210948 1% /mnt/lustre
pass grant check: client:570437632 server:570437632
PASS 64c (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 64d: check grant limit exceed ============= 20:58:28 (1713401908)
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 5.54966 s, 189 MB/s
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 9751: kill: (25213) - No such process
checking grant......
UUID 1K-blocks Used Available Use% Mounted on
lustre-MDT0000_UUID 1414116 4612 1283076 1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID 1414116 3720 1283968 1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID 3833116 943612 2308040 30% /mnt/lustre[OST:0]
lustre-OST0001_UUID 3833116 1564 3605456 1% /mnt/lustre[OST:1]
filesystem_summary: 7666232 945176 5913496 14% /mnt/lustre
pass grant check: client:573968384 server:573968384
Waiting for MDT destroys to complete
PASS 64d (18s)
debug_raw_pointers=0
debug_raw_pointers=0
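The recurring "pass grant check: client:N server:N" lines compare the write grant the client thinks it holds against what the servers believe they handed out; the two sums must match. One plausible way to reproduce the tally by hand on a single-client setup (parameter names as they appear in this log; obdfilter.*.tot_granted is the server-side counter):

# client side: granted plus lost grant, summed over all OSCs
lctl get_param -n osc.*.cur_grant_bytes osc.*.cur_lost_grant_bytes \
    | awk '{c += $1} END {print "client:" c}'
# server side (run on the OSS): total grant outstanding per OST
lctl get_param -n obdfilter.*.tot_granted \
    | awk '{s += $1} END {print "server:" s}'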
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 64e: check grant consumption (no grant allocation) ========================================================== 20:58:48 (1713401928)
debug=+cache
Stopping client oleg146-client.virtnet /mnt/lustre (opts:)
Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre
fail_loc=0x725
1+0 records in
1+0 records out
2502656 bytes (2.5 MB) copied, 0.0998052 s, 25.1 MB/s
fail_loc=0
Stopping client oleg146-client.virtnet /mnt/lustre (opts:)
Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre
fail_loc=0x725
fail_loc=0
PASS 64e (5s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 64f: check grant consumption (with grant allocation) ========================================================== 20:58:56 (1713401936)
debug=+cache
Stopping client oleg146-client.virtnet /mnt/lustre (opts:)
Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre
1+0 records in
1+0 records out
2732032 bytes (2.7 MB) copied, 0.101243 s, 27.0 MB/s
Stopping client oleg146-client.virtnet /mnt/lustre (opts:)
Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre
fail_loc=0x50a
fail_val=3
1+0 records in
1+0 records out
2732032 bytes (2.7 MB) copied, 0.092856 s, 29.4 MB/s
fail_loc=0
fail_val=0
PASS 64f (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 64g: grant shrink on MDT ================== 20:59:01 (1713401941)
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.00969395 s, 13.5 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.0124231 s, 10.6 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.0100973 s, 13.0 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.00947799 s, 13.8 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.00988633 s, 13.3 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.00978013 s, 13.4 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.00995027 s, 13.2 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.0065915 s, 19.9 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.0075836 s, 17.3 MB/s
1+0 records in
1+0 records out
131072 bytes (131 kB) copied, 0.00930368 s, 14.1 MB/s
0 grants, 140 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
mdc.lustre-MDT0000-mdc-ffff8800803dd000.grant_shrink_interval=5
mdc.lustre-MDT0001-mdc-ffff8800803dd000.grant_shrink_interval=5
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 8 pages
0 grants, 0 pages
mdc.lustre-MDT0000-mdc-ffff8800803dd000.grant_shrink_interval=1200
mdc.lustre-MDT0001-mdc-ffff8800803dd000.grant_shrink_interval=1200
PASS 64g (39s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 64h: grant shrink on read ================= 20:59:43 (1713401983)
osc.lustre-OST0000-osc-ffff8800803dd000.grant_shrink=1
osc.lustre-OST0000-osc-ffff8800803dd000.grant_shrink_interval=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.856724 s, 12.2 MB/s
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00982026 s, 417 kB/s
PASS 64h (12s)
debug_raw_pointers=0
debug_raw_pointers=0
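Tests 64c through 64h exercise grant shrink, where an idle client trickles unused write grant back to the server. The knobs the tests flip are ordinary tunables and can be poked directly; a sketch using the parameter names shown above:

# enable shrinking and make it aggressive (check every 10s)
lctl set_param osc.lustre-OST0000-osc-*.grant_shrink=1
lctl set_param osc.lustre-OST0000-osc-*.grant_shrink_interval=10
# with the mount idle, cur_grant_bytes should step down over time
lctl get_param osc.lustre-OST0000-osc-*.cur_grant_bytes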
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 64i: shrink on reconnect ================== 20:59:56 (1713401996)
64+0 records in
64+0 records out
67108864 bytes (67 MB) copied, 1.62957 s, 41.2 MB/s
fail_loc=0x80000513
fail_val=17
osc.lustre-OST0000-osc-ffff8800803dd000.cur_grant_bytes=73414656B
Failing ost1 on oleg146-server
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
21:00:00 (1713402000) shut down
Failover ost1 to oleg146-server
mount facets: ost1
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
21:00:14 (1713402014) targets are mounted
21:00:14 (1713402014) facet_failover done
oleg146-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.230617 s, 36.4 MB/s
PASS 64i (24s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65a: directory with no stripe info ======== 21:00:22 (1713402022)
striped dir -i1 -c2 -H crush /mnt/lustre/d65a.sanity
default stripe 1, ost count 2
PASS 65a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65b: directory setstripe -S stripe_size*2 -i 0 -c 1 ========================================================== 21:00:26 (1713402026)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d65b.sanity
dir stripe 1, default stripe 1, ost count 2
PASS 65b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65c: directory setstripe -S stripe_size*4 -i 1 -c 1 ========================================================== 21:00:30 (1713402030)
striped dir -i1 -c2 -H crush /mnt/lustre/d65c.sanity
dir stripe 1, default stripe 1, ost count 2
PASS 65c (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65d: directory setstripe -S stripe_size -c stripe_count ========================================================== 21:00:35 (1713402035)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d65d.sanity
dir stripe 0, default stripe 1, ost count 2
PASS 65d (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65e: directory setstripe defaults ========= 21:00:38 (1713402038)
striped dir -i1 -c2 -H all_char /mnt/lustre/d65e.sanity
(Default) /mnt/lustre/d65e.sanity
default stripe 1, ost count 2
PASS 65e (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65f: dir setstripe permission (should return error) ============================================================= 21:00:42 (1713402042)
striped dir -i1 -c2 -H crush2 /mnt/lustre/d65f.sanityf
running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [/mnt/lustre/d65f.sanityf]
lfs setstripe: setstripe error for '/mnt/lustre/d65f.sanityf': Operation not permitted
PASS 65f (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65g: directory setstripe -d =============== 21:00:46 (1713402046)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d65g.sanity
(Default) /mnt/lustre/d65g.sanity
PASS 65g (2s)
debug_raw_pointers=0
debug_raw_pointers=0
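Tests 65a-65g cycle through per-directory default layouts. The underlying commands, shown with illustrative values rather than each subtest's exact ones:

# set a directory default: 1 MiB stripes, 2 stripes, starting on OST index 0
lfs setstripe -S 1M -c 2 -i 0 /mnt/lustre/dir
# show the directory default (-d) without listing every file
lfs getstripe -d /mnt/lustre/dir
# remove the default again, which is what test 65g verifies
lfs setstripe -d /mnt/lustre/dir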
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65h: directory stripe info inherit ============================================================================== 21:00:50 (1713402050)
striped dir -i1 -c2 -H crush2 /mnt/lustre/d65h.sanity
striped dir -i1 -c2 -H all_char /mnt/lustre/d65h.sanity/dd1
PASS 65h (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65i: various tests to set root directory striping ========================================================== 21:00:53 (1713402053)
/mnt/lustre
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/d60j.sanity
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/d65e.sanity
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/f61b.sanity has no stripe info
/mnt/lustre/d60f.sanity
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/f60b.sanity
lmm_stripe_count: 1
lmm_stripe_size: 4194304
lmm_pattern: raid0
lmm_layout_gen: 0
lmm_stripe_offset: 1
obdidx objid objid group
1 2 0x2 0x2c0000401
/mnt/lustre/d65b.sanity
stripe_count: 1 stripe_size: 8388608 pattern: raid0 stripe_offset: 0
/mnt/lustre/d65c.sanity
stripe_count: 1 stripe_size: 16777216 pattern: raid0 stripe_offset: 1
/mnt/lustre/f63b.sanity
lmm_stripe_count: 1
lmm_stripe_size: 4194304
lmm_pattern: raid0
lmm_layout_gen: 0
lmm_stripe_offset: 0
obdidx objid objid group
0 2561 0xa01 0x280000401
/mnt/lustre/d65a.sanity
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/d65h.sanity
stripe_count: 1 stripe_size: 8388608 pattern: raid0 stripe_offset: 0
/mnt/lustre/d65f.sanityf
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/f64f.sanity
lmm_stripe_count: 1
lmm_stripe_size: 4194304
lmm_pattern: raid0
lmm_layout_gen: 0
lmm_stripe_offset: 0
obdidx objid objid group
0 2566 0xa06 0x280000401
/mnt/lustre/d65g.sanity
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/f61
lmm_stripe_count: 1
lmm_stripe_size: 4194304
lmm_pattern: raid0
lmm_layout_gen: 0
lmm_stripe_offset: 1
obdidx objid objid group
1 2560 0xa00 0x2c0000401
/mnt/lustre/d65d.sanity
stripe_count: 1 stripe_size: 4194304 pattern: raid0 stripe_offset: -1
/mnt/lustre
lmm_fid: [0x200000007:0x1:0x0]
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/d60j.sanity
lmm_fid: [0x200000402:0x1626:0x0]
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/d65e.sanity
lmm_fid: [0x240000404:0x6:0x0]
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/f61b.sanity has no stripe info
/mnt/lustre/d60f.sanity
lmm_fid: [0x200000402:0x138d:0x0]
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/f60b.sanity
lmm_magic: 0x0BD10BD0
lmm_seq: 0x200000402
lmm_object_id: 0x3
lmm_fid: [0x200000402:0x3:0x0]
lmm_stripe_count: 1
lmm_stripe_size: 4194304
lmm_pattern: raid0
lmm_layout_gen: 0
lmm_stripe_offset: 1
obdidx objid objid group
1 2 0x2 0x2c0000401
/mnt/lustre/d65b.sanity
lmm_fid: [0x240000404:0x2:0x0]
stripe_count: 1 stripe_size: 8388608 pattern: raid0 stripe_offset: 0
/mnt/lustre/d65c.sanity
lmm_fid: [0x240000404:0x3:0x0]
stripe_count: 1 stripe_size: 16777216 pattern: raid0 stripe_offset: 1
/mnt/lustre/f63b.sanity
lmm_magic: 0x0BD10BD0
lmm_seq: 0x200000402
lmm_object_id: 0x1697
lmm_fid: [0x200000402:0x1697:0x0]
lmm_stripe_count: 1
lmm_stripe_size: 4194304
lmm_pattern: raid0
lmm_layout_gen: 0
lmm_stripe_offset: 0
obdidx objid objid group
0 2561 0xa01 0x280000401
/mnt/lustre/d65a.sanity
lmm_fid: [0x240000404:0x1:0x0]
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/d65h.sanity
lmm_fid: [0x240000404:0xa:0x0]
stripe_count: 1 stripe_size: 8388608 pattern: raid0 stripe_offset: 0
/mnt/lustre/d65f.sanityf
lmm_fid: [0x240000404:0x8:0x0]
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/f64f.sanity
lmm_magic: 0x0BD10BD0
lmm_seq: 0x200000407
lmm_object_id: 0x1
lmm_fid: [0x200000407:0x1:0x0]
lmm_stripe_count: 1
lmm_stripe_size: 4194304
lmm_pattern: raid0
lmm_layout_gen: 0
lmm_stripe_offset: 0
obdidx objid objid group
0 2566 0xa06 0x280000401
/mnt/lustre/d65g.sanity
lmm_fid: [0x240000404:0x9:0x0]
stripe_count: -1 stripe_size: 65536 pattern: raid0 stripe_offset: -1
/mnt/lustre/f61
lmm_magic: 0x0BD10BD0
lmm_seq: 0x200000402
lmm_object_id: 0x168b
lmm_fid: [0x200000402:0x168b:0x0]
lmm_stripe_count: 1
lmm_stripe_size: 4194304
lmm_pattern: raid0
lmm_layout_gen: 0
lmm_stripe_offset: 1
obdidx objid objid group
1 2560 0xa00 0x2c0000401
/mnt/lustre/d65d.sanity
lmm_fid: [0x240000404:0x4:0x0]
stripe_count: 1 stripe_size: 4194304 pattern: raid0 stripe_offset: -1
PASS 65i (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65j: set default striping on root directory (bug 6367)=========================================================== 21:00:58 (1713402058)
PASS 65j (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65k: validate manual striping works properly with deactivated OSCs ========================================================== 21:01:02 (1713402062)
Check OST status:
lustre-OST0000-osc-MDT0001 is active
lustre-OST0000-osc-MDT0000 is active
lustre-OST0001-osc-MDT0001 is active
lustre-OST0001-osc-MDT0000 is active
total: 1000 open/close in 2.11 seconds: 474.63 ops/second
Deactivate: lustre-OST0000-osc-MDT0001
/home/green/git/lustre-release/lustre/utils/lfs setstripe -i 0 -c 1 /mnt/lustre/d65k.sanity/0
/home/green/git/lustre-release/lustre/utils/lfs setstripe -i 1 -c 1 /mnt/lustre/d65k.sanity/1
 - unlinked 0 (time 1713402069 ; total 0 ; last 0)
total: 1000 unlinks in 1 seconds: 1000.000000 unlinks/second
lustre-OST0000-osc-MDT0001 is Activate
oleg146-server: oleg146-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg146-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
oleg146-server: oleg146-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50
oleg146-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
total: 1000 open/close in 2.13 seconds: 470.22 ops/second
Deactivate: lustre-OST0000-osc-MDT0000
/home/green/git/lustre-release/lustre/utils/lfs setstripe -i 0 -c 1 /mnt/lustre/d65k.sanity/0
/home/green/git/lustre-release/lustre/utils/lfs setstripe -i 1 -c 1 /mnt/lustre/d65k.sanity/1
 - unlinked 0 (time 1713402078 ; total 0 ; last 0)
total: 1000 unlinks in 1 seconds: 1000.000000 unlinks/second
lustre-OST0000-osc-MDT0000 is Activate
oleg146-server: oleg146-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg146-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
oleg146-server: oleg146-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50
oleg146-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
total: 1000 open/close in 2.23 seconds: 447.79 ops/second
Deactivate: lustre-OST0001-osc-MDT0001
/home/green/git/lustre-release/lustre/utils/lfs setstripe -i 0 -c 1 /mnt/lustre/d65k.sanity/0
/home/green/git/lustre-release/lustre/utils/lfs setstripe -i 1 -c 1 /mnt/lustre/d65k.sanity/1
 - unlinked 0 (time 1713402087 ; total 0 ; last 0)
total: 1000 unlinks in 1 seconds: 1000.000000 unlinks/second
lustre-OST0001-osc-MDT0001 is Activate
oleg146-server: oleg146-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 50
oleg146-server: os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
oleg146-server: oleg146-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 50
oleg146-server: os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
total: 1000 open/close in 2.14 seconds: 467.03 ops/second
Deactivate: lustre-OST0001-osc-MDT0000
/home/green/git/lustre-release/lustre/utils/lfs setstripe -i 0 -c 1 /mnt/lustre/d65k.sanity/0
/home/green/git/lustre-release/lustre/utils/lfs setstripe -i 1 -c 1 /mnt/lustre/d65k.sanity/1
 - unlinked 0 (time 1713402096 ; total 0 ; last 0)
total: 1000 unlinks in 1 seconds: 1000.000000 unlinks/second
lustre-OST0001-osc-MDT0000 is Activate
oleg146-server: oleg146-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 50
oleg146-server: os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
oleg146-server: oleg146-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 50
oleg146-server: os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
PASS 65k (40s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65l: lfs find on -1 stripe dir ================================================================================== 21:01:43 (1713402103)
striped dir -i1 -c2 -H all_char /mnt/lustre/d65l.sanity/test_dir
PASS 65l (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65m: normal user can't set filesystem default stripe ========================================================== 21:01:47 (1713402107)
running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-c] [2] [/mnt/lustre]
lfs setstripe: setstripe error for '/mnt/lustre': Operation not permitted
PASS 65m (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65n: don't inherit default layout from root for new subdirectories ========================================================== 21:01:50 (1713402110)
Creating new pool
oleg146-server: Pool lustre.test_65n created
Adding targets to pool
oleg146-server: OST lustre-OST0000_UUID added to pool lustre.test_65n
oleg146-server: OST lustre-OST0001_UUID added to pool lustre.test_65n
/home/green/git/lustre-release/lustre/utils/lfs getstripe -d /mnt/lustre/d65n.sanity-4
stripe_count: 1 stripe_size: 4194304 pattern: raid0 stripe_offset: -1 pool: test_65n
/home/green/git/lustre-release/lustre/utils/lfs getstripe -d /mnt/lustre
stripe_count: 1 stripe_size: 4194304 pattern: raid0 stripe_offset: -1 pool: test_65n
Destroy the created pools: test_65n
lustre.test_65n
oleg146-server: OST lustre-OST0000_UUID removed from pool lustre.test_65n
oleg146-server: OST lustre-OST0001_UUID removed from pool lustre.test_65n
oleg146-server: Pool lustre.test_65n destroyed
PASS 65n (14s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65o: pool inheritance for mdt component === 21:02:06 (1713402126)
Creating new pool
oleg146-server: Pool lustre.test_65o created
Adding targets to pool
oleg146-server: OST lustre-OST0000_UUID added to pool lustre.test_65o
oleg146-server: OST lustre-OST0001_UUID added to pool lustre.test_65o
/mnt/lustre/d65o.sanity
lcm_layout_gen: 0
lcm_mirror_count: 1
lcm_entry_count: 2
lcme_id: N/A
lcme_mirror_id: N/A
lcme_flags: 0
lcme_extent.e_start: 0
lcme_extent.e_end: 1048576
stripe_count: 0 stripe_size: 1048576 pattern: mdt stripe_offset: -1
lcme_id: N/A
lcme_mirror_id: N/A
lcme_flags: 0
lcme_extent.e_start: 1048576
lcme_extent.e_end: EOF
stripe_count: 1 stripe_size: 1048576 pattern: raid0 stripe_offset: -1 pool: test_65o
/mnt/lustre/d65o.sanity/dir2
lcm_layout_gen: 0
lcm_mirror_count: 1
lcm_entry_count: 2
lcme_id: N/A
lcme_mirror_id: N/A
lcme_flags: 0
lcme_extent.e_start: 0
lcme_extent.e_end: 1048576
stripe_count: 0 stripe_size: 1048576 pattern: mdt stripe_offset: -1
lcme_id: N/A
lcme_mirror_id: N/A
lcme_flags: 0
lcme_extent.e_start: 1048576
lcme_extent.e_end: EOF
stripe_count: 1 stripe_size: 1048576 pattern: raid0 stripe_offset: -1 pool: test_65o
lcm_layout_gen: 0
lcm_mirror_count: 1
lcm_entry_count: 1
lcme_id: N/A
lcme_mirror_id: N/A
lcme_flags: 0
lcme_extent.e_start: 0
lcme_extent.e_end: EOF
stripe_count: 1 stripe_size: 4194304 pattern: raid0 stripe_offset: -1 pool: test_65o
Destroy the created pools: test_65o
lustre.test_65o
oleg146-server: OST lustre-OST0000_UUID removed from pool lustre.test_65o
oleg146-server: OST lustre-OST0001_UUID removed from pool lustre.test_65o
oleg146-server: Pool lustre.test_65o destroyed
PASS 65o (14s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65p: setstripe with yaml file and huge number ========================================================== 21:02:22 (1713402142)
striped dir -i1 -c2 -H crush2 /mnt/lustre/d65p.sanity/src_dir
striped dir -i1 -c2 -H all_char /mnt/lustre/d65p.sanity/dst_dir
PASS 65p (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 65q: setstripe with >=8E offset should fail ========================================================== 21:02:26 (1713402146)
striped dir -i1 -c2 -H crush /mnt/lustre/d65q.sanity/src_dir
lfs setstripe: cannot set default composite layout for '/mnt/lustre/d65q.sanity/src_dir': Invalid argument
PASS 65q (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 66: update inode blocks count on client ========================================================================= 21:02:30 (1713402150)
8+0 records in
8+0 records out
8192 bytes (8.2 kB) copied, 0.00806323 s, 1.0 MB/s
PASS 66 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
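Tests 65n and 65o above rely on OST pools. The lifecycle, paraphrased from the log (run where lctl can reach the MGS; fsname is 'lustre', and the target directory below is illustrative):

lctl pool_new lustre.test_65n                        # create the pool
lctl pool_add lustre.test_65n lustre-OST[0000-0001]  # add both OSTs
lfs setstripe -p test_65n /mnt/lustre/somedir        # layouts here draw from the pool
lctl pool_remove lustre.test_65n lustre-OST[0000-0001]
lctl pool_destroy lustre.test_65n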
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 69: verify oa2dentry return -ENOENT doesn't LBUG ================================================================ 21:02:34 (1713402154)
directio on /mnt/lustre/f69.sanity.2 for 1x4194304 bytes
PASS
fail_loc=0x217
directio on /mnt/lustre/f69.sanity for 2x4194304 bytes
Write error No such file or directory (rc = -1, len = 8388608)
fail_loc=0
directio on /mnt/lustre/f69.sanity for 2x4194304 bytes
PASS
directio on /mnt/lustre/f69.sanity for 1x4194304 bytes
PASS
fail_loc=0x217
directio on /mnt/lustre/f69.sanity for 1x4194304 bytes
Read error: No such file or directory rc = -1
fail_loc=0
PASS 69 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 70a: verify health_check, health_write don't explode (on OST) ========================================================== 21:02:39 (1713402159)
enable_health_write=off
enable_health_write=0
enable_health_write=on
enable_health_write=1
enable_health_write=0
PASS 70a (4s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity test_71 skipping SLOW test 71
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 72a: Test that remove suid works properly (bug5695) ============================================================== 21:02:45 (1713402165)
running as uid/gid/euid/egid 500/500/500/500, groups: [true]
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d0_runas_test/f7509]
running as uid/gid/euid/egid 500/500/500/500, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/f72a.sanity] [bs=512] [count=1]
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00350557 s, 146 kB/s
PASS 72a (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 72b: Test that we keep mode setting if without file data changed (bug 24226) ========================================================== 21:02:48 (1713402168)
running as uid/gid/euid/egid 500/500/500/500, groups: [true]
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d0_runas_test/f7509]
striped dir -i0 -c2 -H all_char /mnt/lustre/f72b.sanity-dg
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/f72b.sanity-du
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [777] [/mnt/lustre/f72b.sanity-fg]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-fg': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [777] [/mnt/lustre/f72b.sanity-fu]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-fu': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [777] [/mnt/lustre/f72b.sanity-dg]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-dg': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [777] [/mnt/lustre/f72b.sanity-du]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-du': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [2777] [/mnt/lustre/f72b.sanity-fg]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-fg': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [2777] [/mnt/lustre/f72b.sanity-fu]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-fu': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [2777] [/mnt/lustre/f72b.sanity-dg]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-dg': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [2777] [/mnt/lustre/f72b.sanity-du]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-du': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [4777] [/mnt/lustre/f72b.sanity-fg]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-fg': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [4777] [/mnt/lustre/f72b.sanity-fu]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-fu': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [4777] [/mnt/lustre/f72b.sanity-dg]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-dg': Operation not permitted
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [4777] [/mnt/lustre/f72b.sanity-du]
chmod: changing permissions of '/mnt/lustre/f72b.sanity-du': Operation not permitted
PASS 72b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 73: multiple MDC requests (should not deadlock) ========================================================== 21:02:51 (1713402171)
striped dir -i1 -c2 -H all_char /mnt/lustre/d73-1
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d73-2
multiop /mnt/lustre/d73-1/f73-1 vO_c
TMPPIPE=/tmp/multiop_open_wait_pipe.7509
fail_loc=0x80000129
fail_loc=0
/mnt/lustre/d73-1/f73-1 has type file OK
/mnt/lustre/d73-1/f73-2 has type file OK
/mnt/lustre/d73-2/f73-3 has type file OK
PASS 73 (28s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 74a: ldlm_enqueue freed-export error path, ls (shouldn't LBUG) ========================================================== 21:03:21 (1713402201)
fail_loc=0x8000030e
/mnt/lustre/f74a
fail_loc=0
PASS 74a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 74b: ldlm_enqueue freed-export error path, touch (shouldn't LBUG) ========================================================== 21:03:25 (1713402205)
fail_loc=0x8000030e
fail_loc=0
PASS 74b (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 74c: ldlm_lock_create error path, (shouldn't LBUG) ========================================================== 21:03:28 (1713402208)
fail_loc=0x319
touch: cannot touch '/mnt/lustre/f74c.sanity': No such file or directory
fail_loc=0
PASS 74c (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 76a: confirm clients recycle inodes properly ============================================================== 21:03:31 (1713402211)
before slab objects: 83
created: 512, after slab objects: 83
PASS 76a (19s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 76b: confirm clients recycle directory inodes properly ============================================================== 21:03:51 (1713402231)
slab objects before: 83, after: 83
PASS 76b (14s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77a: normal checksum read/write operation ========================================================== 21:04:07 (1713402247)
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.0750812 s, 112 MB/s
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.211327 s, 39.7 MB/s
PASS 77a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
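Test 77b below walks every supported wire checksum while injecting client-side checksum errors (fail_loc 0x80000408/0x80000409). Choosing the algorithm by hand uses the same parameter the test sets; a sketch:

# the bracketed entry in the list is the active algorithm
lctl get_param osc.*.checksum_type
# switch, e.g., to adler, as the test iterations do
lctl set_param osc.*.checksum_type=adler
# wire checksums can also be toggled off entirely per OSC
lctl set_param osc.*.checksums=0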
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77b: checksum error on client write, read ========================================================== 21:04:11 (1713402251)
fail_loc=0x80000409
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.257281 s, 32.6 MB/s
fail_loc=0
set checksum type to crc32, rc = 0
fail_loc=0x80000408
fail_loc=0
set checksum type to adler, rc = 0
fail_loc=0x80000408
fail_loc=0
set checksum type to crc32c, rc = 0
fail_loc=0x80000408
fail_loc=0
set checksum type to t10ip512, rc = 0
fail_loc=0x80000408
fail_loc=0
set checksum type to t10ip4K, rc = 0
fail_loc=0x80000408
fail_loc=0
set checksum type to t10crc512, rc = 0
fail_loc=0x80000408
fail_loc=0
set checksum type to t10crc4K, rc = 0
fail_loc=0x80000408
fail_loc=0
set checksum type to crc32c, rc = 0
PASS 77b (15s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77c: checksum error on client read with debug ========================================================== 21:04:28 (1713402268)
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.214714 s, 39.1 MB/s
osc.lustre-OST0000-osc-ffff8800803dd000.checksum_dump=1
osc.lustre-OST0001-osc-ffff8800803dd000.checksum_dump=1
obdfilter.lustre-OST0000.checksum_dump=1
obdfilter.lustre-OST0001.checksum_dump=1
fail_loc=0x80000408
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 1.40339 s, 6.0 MB/s
fail_loc=0
osc.lustre-OST0000-osc-ffff8800803dd000.checksum_dump=0
osc.lustre-OST0001-osc-ffff8800803dd000.checksum_dump=0
obdfilter.lustre-OST0000.checksum_dump=0
obdfilter.lustre-OST0001.checksum_dump=0
PASS 77c (10s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77d: checksum error on OST direct write, read ========================================================== 21:04:40 (1713402280)
fail_loc=0x80000409
directio on /mnt/lustre/f77d.sanity for 8x1048576 bytes
PASS
fail_loc=0
fail_loc=0x80000408
directio on /mnt/lustre/f77d.sanity for 8x1048576 bytes
PASS
fail_loc=0
PASS 77d (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77f: repeat checksum error on write (expect error) ========================================================== 21:04:45 (1713402285)
set checksum type to crc32, rc = 0
fail_loc=0x409
directio on /mnt/lustre/f77f.sanity for 8x1048576 bytes
Write error Input/output error (rc = -1, len = 8388608)
fail_loc=0
set checksum type to adler, rc = 0
fail_loc=0x409
directio on /mnt/lustre/f77f.sanity for 8x1048576 bytes
Write error Input/output error (rc = -1, len = 8388608)
fail_loc=0
set checksum type to crc32c, rc = 0
fail_loc=0x409
directio on /mnt/lustre/f77f.sanity for 8x1048576 bytes
Write error Input/output error (rc = -1, len = 8388608)
fail_loc=0
set checksum type to t10ip512, rc = 0
fail_loc=0x409
directio on /mnt/lustre/f77f.sanity for 8x1048576 bytes
Write error Input/output error (rc = -1, len = 8388608)
fail_loc=0
set checksum type to t10ip4K, rc = 0
fail_loc=0x409
directio on /mnt/lustre/f77f.sanity for 8x1048576 bytes
Write error Input/output error (rc = -1, len = 8388608)
fail_loc=0
set checksum type to t10crc512, rc = 0
fail_loc=0x409
directio on /mnt/lustre/f77f.sanity for 8x1048576 bytes
Write error Input/output error (rc = -1, len = 8388608)
fail_loc=0
set checksum type to t10crc4K, rc = 0
fail_loc=0x409
directio on /mnt/lustre/f77f.sanity for 8x1048576 bytes
Write error Input/output error (rc = -1, len = 8388608)
fail_loc=0
set checksum type to crc32c, rc = 0
PASS 77f (395s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77g: checksum error on OST write, read ==== 21:11:23 (1713402683)
fail_loc=0x8000021a
8+0 records in
8+0 records out
8388608 bytes (8.4 MB) copied, 0.24466 s, 34.3 MB/s
fail_loc=0
fail_loc=0x8000021b
fail_loc=0
PASS 77g (6s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77k: enable/disable checksum correctly ==== 21:11:30 (1713402690)
remount client, checksum should be 0
Stopping client oleg146-client.virtnet /mnt/lustre (opts:)
Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre
Waiting 90s for '1'
Updated after 2s: want '1' got '1'
remount client, checksum should be 1
Stopping client oleg146-client.virtnet /mnt/lustre (opts:)
Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre
remount client with option checksum, checksum should be 1
192.168.201.146@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg146-client.virtnet /mnt/lustre (opts:)
Starting client: oleg146-client.virtnet: -o user_xattr,flock,checksum oleg146-server@tcp:/lustre /mnt/lustre
remount client with option nochecksum, checksum should be 0
192.168.201.146@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg146-client.virtnet /mnt/lustre (opts:)
Starting client: oleg146-client.virtnet: -o user_xattr,flock,nochecksum oleg146-server@tcp:/lustre /mnt/lustre
Stopping client oleg146-client.virtnet /mnt/lustre (opts:)
Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre
Waiting 90s for '0'
PASS 77k (7s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77l: preferred checksum type is remembered after reconnected ========================================================== 21:11:39 (1713402699)
osc.lustre-OST0000-osc-ffff8800803d8800.idle_timeout=10
osc.lustre-OST0001-osc-ffff8800803d8800.idle_timeout=10
error: set_param: setting /proc/fs/lustre/osc/lustre-OST0000-osc-ffff8800803d8800/checksum_type=invalid: Invalid argument
error: set_param: setting /proc/fs/lustre/osc/lustre-OST0001-osc-ffff8800803d8800/checksum_type=invalid: Invalid argument
error: set_param: setting 'osc/*osc-[^mM]*/checksum_type'='invalid': Invalid argument
set checksum type to invalid, rc = 22
set checksum type to crc32, rc = 0
ldlm.namespaces.lustre-OST0000-osc-ffff8800803d8800.lru_size=400
ldlm.namespaces.lustre-OST0001-osc-ffff8800803d8800.lru_size=400
oleg146-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid in IDLE state after 4 sec
oleg146-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid in FULL state after 0 sec
set checksum type to adler, rc = 0
ldlm.namespaces.lustre-OST0000-osc-ffff8800803d8800.lru_size=400
ldlm.namespaces.lustre-OST0001-osc-ffff8800803d8800.lru_size=400
oleg146-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid in IDLE state after 12 sec
oleg146-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid in FULL state after 0 sec
set checksum type to crc32c, rc = 0
ldlm.namespaces.lustre-OST0000-osc-ffff8800803d8800.lru_size=400
ldlm.namespaces.lustre-OST0001-osc-ffff8800803d8800.lru_size=400
oleg146-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid in IDLE state after 13 sec
oleg146-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid in FULL state after 0 sec
set checksum type to t10ip512, rc = 0
ldlm.namespaces.lustre-OST0000-osc-ffff8800803d8800.lru_size=400
ldlm.namespaces.lustre-OST0001-osc-ffff8800803d8800.lru_size=400
oleg146-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid in IDLE state after 12 sec
oleg146-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid in FULL state after 0 sec
set checksum type to t10ip4K, rc = 0
ldlm.namespaces.lustre-OST0000-osc-ffff8800803d8800.lru_size=400
ldlm.namespaces.lustre-OST0001-osc-ffff8800803d8800.lru_size=400
oleg146-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid in IDLE state after 11 sec
oleg146-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid in FULL state after 0 sec
set checksum type to t10crc512, rc = 0
ldlm.namespaces.lustre-OST0000-osc-ffff8800803d8800.lru_size=400
ldlm.namespaces.lustre-OST0001-osc-ffff8800803d8800.lru_size=400
oleg146-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid in IDLE state after 12 sec
oleg146-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid in FULL state after 0 sec
set checksum type to t10crc4K, rc = 0
ldlm.namespaces.lustre-OST0000-osc-ffff8800803d8800.lru_size=400
ldlm.namespaces.lustre-OST0001-osc-ffff8800803d8800.lru_size=400
oleg146-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid in IDLE state after 11 sec
oleg146-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid 50
osc.lustre-OST0000-osc-ffff8800803d8800.ost_server_uuid in FULL state after 0 sec
osc.lustre-OST0000-osc-ffff8800803d8800.idle_timeout=20
osc.lustre-OST0001-osc-ffff8800803d8800.idle_timeout=20
set checksum type to crc32c, rc = 0
PASS 77l (101s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77m: Verify checksum_speed is correctly read ========================================================== 21:13:22 (1713402802)
checksum_speed=
adler32: 1507
crc32: 1859
crc32c: 10571
PASS 77m (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77n: Verify read from a hole inside contiguous blocks with T10PI ========================================================== 21:13:26 (1713402806)
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00285742 s, 1.4 MB/s
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00113308 s, 3.6 MB/s
set checksum type to t10ip512, rc = 0
1+0 records in
1+0 records out
12288 bytes (12 kB) copied, 0.00896477 s, 1.4 MB/s
set checksum type to t10ip4K, rc = 0
1+0 records in
1+0 records out
12288 bytes (12 kB) copied, 0.00917055 s, 1.3 MB/s
set checksum type to t10crc512, rc = 0
1+0 records in
1+0 records out
12288 bytes (12 kB) copied, 0.00531836 s, 2.3 MB/s
set checksum type to t10crc4K, rc = 0
1+0 records in
1+0 records out
12288 bytes (12 kB) copied, 0.00543576 s, 2.3 MB/s
set checksum type to crc32c, rc = 0
PASS 77n (6s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 77o: Verify checksum_type for server (mdt and ofd(obdfilter)) ========================================================== 21:13:34 (1713402814)
obdfilter.lustre-*.checksum_type:
crc32 adler [crc32c] t10ip512 t10ip4K t10crc512 t10crc4K
crc32 adler [crc32c] t10ip512 t10ip4K t10crc512 t10crc4K
mdt.lustre-*.checksum_type:
crc32 adler [crc32c] t10ip512 t10ip4K t10crc512 t10crc4K
crc32 adler [crc32c] t10ip512 t10ip4K t10crc512 t10crc4K
PASS 77o (5s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 78: handle large O_DIRECT writes correctly ====================================================================== 21:13:41 (1713402821)
MemFree: 3317, Max file size: 600000
MemTotal: 3730
Mem to use for directio: 1737
Smallest OST: 3602800
File size: 32
directIO rdwr round 1 of 1
directio on /mnt/lustre/f78.sanity for 32x1048576 bytes
PASS
PASS 78 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 79: df report consistency check ================================================================================= 21:13:47 (1713402827)
Waiting for MDT destroys to complete
PASS 79 (9s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 80: Page eviction is equally fast at high offsets too ========================================================== 21:13:59 (1713402839)
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0416703 s, 25.2 MB/s
PASS 80 (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 81a: OST should retry write when get -ENOSPC ========================================================================= 21:14:04 (1713402844)
fail_loc=0x80000228
PASS 81a (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 81b: OST should return -ENOSPC when retry still fails ================================================================= 21:14:09 (1713402849)
fail_loc=0x228
write: No space left on device
PASS 81b (3s)
debug_raw_pointers=0
debug_raw_pointers=0
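The 81a/81b pair above is a compact illustration of the fail_loc one-shot convention: 81a arms 0x80000228, where the high 0x80000000 bit (the fire-once flag in the Lustre tree) makes the injected -ENOSPC hit a single time, so the OST's internal retry succeeds; 81b arms plain 0x228, the fault persists across retries, and the write legitimately fails. As a sketch:

lctl set_param fail_loc=0x80000228   # one-shot: the retry absorbs the error
lctl set_param fail_loc=0x228        # persistent: -ENOSPC reaches userspace
lctl set_param fail_loc=0            # disarm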
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 99: cvs strange file/directory operations ========================================================== 21:14:14 (1713402854)
striped dir -i1 -c2 -H all_char /mnt/lustre/d99.sanity.cvsroot
running as uid/gid/euid/egid 500/500/500/500, groups: [cvs] [-d] [/mnt/lustre/d99.sanity.cvsroot] [init]
running as uid/gid/euid/egid 500/500/500/500, groups: [cvs] [-d] [/mnt/lustre/d99.sanity.cvsroot] [import] [-m] [nomesg] [d99.sanity.reposname] [vtag] [rtag]
N d99.sanity.reposname/README
N d99.sanity.reposname/network
N d99.sanity.reposname/netconsole
N d99.sanity.reposname/functions
No conflicts created by this import
striped dir -i1 -c2 -H all_char /mnt/lustre/d99.sanity.reposname
running as uid/gid/euid/egid 500/500/500/500, groups: [cvs] [-d] [/mnt/lustre/d99.sanity.cvsroot] [co] [d99.sanity.reposname]
cvs checkout: Updating d99.sanity.reposname
U d99.sanity.reposname/README
U d99.sanity.reposname/functions
U d99.sanity.reposname/netconsole
U d99.sanity.reposname/network
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [foo99]
running as uid/gid/euid/egid 500/500/500/500, groups: [cvs] [add] [-m] [addmsg] [foo99]
cvs add: scheduling file `foo99' for addition
cvs add: use 'cvs commit' to add this file permanently
running as uid/gid/euid/egid 500/500/500/500, groups: [cvs] [update]
cvs update: Updating .
A foo99
running as uid/gid/euid/egid 500/500/500/500, groups: [cvs] [commit] [-m] [nomsg] [foo99]
RCS file: /mnt/lustre/d99.sanity.cvsroot/d99.sanity.reposname/foo99,v
done
Checking in foo99;
/mnt/lustre/d99.sanity.cvsroot/d99.sanity.reposname/foo99,v <-- foo99
initial revision: 1.1
done
PASS 99 (9s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 100: check local port using privileged port ========================================================== 21:14:24 (1713402864)
PASS 100 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 101a: check read-ahead for random reads === 21:14:28 (1713402868)
nreads: 10000 file size: 96MB
53.067251s, 12.3496MB/s
osc.lustre-OST0000-osc-ffff8800803d8800.rpc_stats=
snapshot_time: 1713402934.438292881 secs.nsecs
start_time: 1713402869.392928143 secs.nsecs
elapsed_time: 65.045364738 secs.nsecs
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 1 0 0 | 3 10 10
2: 0 0 0 | 1 3 13
4: 0 0 0 | 0 0 13
8: 4 0 0 | 0 0 13
16: 6402 99 100 | 0 0 13
32: 0 0 100 | 0 0 13
64: 0 0 100 | 0 0 13
128: 0 0 100 | 0 0 13
256: 0 0 100 | 0 0 13
512: 0 0 100 | 0 0 13
1024: 0 0 100 | 25 86 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
1: 6407 100 100 | 25 86 86
2: 0 0 100 | 3 10 96
3: 0 0 100 | 1 3 100
read write
offset rpcs % cum % | rpcs % cum %
0: 5 0 0 | 5 17 17
1: 0 0 0 | 0 0 17
2: 0 0 0 | 0 0 17
4: 0 0 0 | 0 0 17
8: 0 0 0 | 0 0 17
16: 5 0 0 | 0 0 17
32: 8 0 0 | 0 0 17
64: 19 0 0 | 0 0 17
128: 32 0 1 | 0 0 17
256: 71 1 2 | 0 0 17
512: 134 2 4 | 0 0 17
1024: 260 4 8 | 1 3 20
2048: 563 8 17 | 2 6 27
4096: 1083 16 34 | 4 13 41
8192: 2180 34 68 | 8 27 68
16384: 2047 31 100 | 9 31 100
osc.lustre-OST0001-osc-ffff8800803d8800.rpc_stats=
snapshot_time: 1713402934.438577077 secs.nsecs
start_time: 1713402869.392995167 secs.nsecs
elapsed_time: 65.045581910 secs.nsecs
read RPCs in flight: 0
write RPCs in flight: 0
pending write pages: 0
pending read pages: 0
read write
pages per rpc rpcs % cum % | rpcs % cum %
1: 0 0 0 | 1 33 33
2: 0 0 0 | 1 33 66
4: 0 0 0 | 0 0 66
8: 0 0 0 | 1 33 100
read write
rpcs in flight rpcs % cum % | rpcs % cum %
1: 0 0 0 | 1 33 33
2: 0 0 0 | 1 33 66
3: 0 0 0 | 1 33 100
read write
offset rpcs % cum % | rpcs % cum %
0: 0 0 0 | 3 100 100
llite.lustre-ffff8800803d8800.read_ahead_stats=
snapshot_time 1713402934.443536884 secs.nsecs
start_time 1713402869.395714788 secs.nsecs
elapsed_time 65.047822096 secs.nsecs
hits 96041 samples [pages]
misses 6407 samples [pages]
readpage_not_consecutive 9991 samples [pages]
zero_size_window 96042 samples [pages]
failed_to_fast_read 6407 samples [pages]
readahead_pages 6406 samples [pages] 5 15 96041
PASS 101a (68s)
debug_raw_pointers=0
debug_raw_pointers=0
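Test 101a's verdict comes from the llite readahead counters dumped above; the hit/miss split is what the test asserts on, since random reads should mostly bypass the readahead window. The counters can be reset and re-read by hand; a sketch with the parameter names from this log (the file path is illustrative):

# reset the counters, then run the read workload
lctl set_param llite.*.read_ahead_stats=0
dd if=/mnt/lustre/somefile of=/dev/null bs=4k
# hits vs. misses shows how well the window predicted the access pattern
lctl get_param llite.*.read_ahead_stats
lctl get_param llite.*.max_read_ahead_mb   # the readahead budget itself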
16384 7.382256s, 1.13632MB/s Read-ahead success for size 32768 7.429357s, 1.12912MB/s Read-ahead success for size 65536 7.232131s, 1.15991MB/s Read-ahead success for size 131072 6.547826s, 1.28113MB/s Read-ahead success for size 262144 6.844453s, 1.22561MB/s Read-ahead success for size 524288 6.073370s, 1.38121MB/s Read-ahead success for size 1048576 PASS 101b (64s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 101c: check stripe_size aligned read-ahead ========================================================== 21:16:44 (1713403004) striped dir -i1 -c2 -H all_char /mnt/lustre/d101c.sanity osc.lustre-OST0000-osc-ffff8800803d8800.rpc_stats=0 osc.lustre-OST0001-osc-ffff8800803d8800.rpc_stats=0 16.290360s, 40.2299MB/s osc.lustre-OST0000-osc-ffff8800803d8800.rpc_stats= snapshot_time: 1713403025.188065884 secs.nsecs start_time: 1713403008.880703550 secs.nsecs elapsed_time: 16.307362334 secs.nsecs read RPCs in flight: 0 write RPCs in flight: 0 pending write pages: 0 pending read pages: 0 read write pages per rpc rpcs % cum % | rpcs % cum % 1: 0 0 0 | 0 0 0 2: 0 0 0 | 0 0 0 4: 0 0 0 | 0 0 0 8: 0 0 0 | 0 0 0 16: 798 100 100 | 0 0 0 read write rpcs in flight rpcs % cum % | rpcs % cum % 1: 798 100 100 | 0 0 0 read write offset rpcs % cum % | rpcs % cum % 0: 1 0 0 | 0 0 0 1: 0 0 0 | 0 0 0 2: 0 0 0 | 0 0 0 4: 0 0 0 | 0 0 0 8: 0 0 0 | 0 0 0 16: 1 0 0 | 0 0 0 32: 2 0 0 | 0 0 0 64: 4 0 1 | 0 0 0 128: 8 1 2 | 0 0 0 256: 16 2 4 | 0 0 0 512: 32 4 8 | 0 0 0 1024: 63 7 15 | 0 0 0 2048: 128 16 31 | 0 0 0 4096: 255 31 63 | 0 0 0 8192: 288 36 100 | 0 0 0 osc.lustre-OST0001-osc-ffff8800803d8800.rpc_stats= snapshot_time: 1713403025.188278822 secs.nsecs start_time: 1713403008.880787508 secs.nsecs elapsed_time: 16.307491314 secs.nsecs read RPCs in flight: 0 write RPCs in flight: 0 pending write pages: 0 pending read pages: 0 read write pages per rpc rpcs % cum % | rpcs % cum % 1: 0 0 0 | 0 0 0 2: 0 0 0 | 0 0 0 4: 0 0 0 | 0 0 0 8: 0 0 0 | 0 0 0 16: 799 100 100 | 0 0 0 read write rpcs in flight rpcs % cum % | rpcs % cum % 1: 799 100 100 | 0 0 0 read write offset rpcs % cum % | rpcs % cum % 0: 1 0 0 | 0 0 0 1: 0 0 0 | 0 0 0 2: 0 0 0 | 0 0 0 4: 0 0 0 | 0 0 0 8: 0 0 0 | 0 0 0 16: 1 0 0 | 0 0 0 32: 2 0 0 | 0 0 0 64: 4 0 1 | 0 0 0 128: 8 1 2 | 0 0 0 256: 16 2 4 | 0 0 0 512: 32 4 8 | 0 0 0 1024: 64 8 16 | 0 0 0 2048: 128 16 32 | 0 0 0 4096: 255 31 63 | 0 0 0 8192: 288 36 100 | 0 0 0 osc.lustre-OST0000-osc-ffff8800803d8800.rpc_stats check passed! osc.lustre-OST0001-osc-ffff8800803d8800.rpc_stats check passed! 
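[editor's note] The rpc_stats histograms above are what the 101-series checks assert on: the test zeroes the counters, drives a known I/O pattern, then verifies the "pages per rpc" distribution (16 pages/RPC dominating the 101c read column). A minimal sketch of the same inspect loop, assuming a client mounted at /mnt/lustre and a hypothetical file name; the lctl parameter names are the ones shown in this log:

  #!/bin/bash
  # Reset per-OSC RPC statistics (same write the test performs above).
  lctl set_param osc.*.rpc_stats=0

  # Drive a known sequential read; f_rpcstats is a hypothetical test file.
  dd if=/mnt/lustre/f_rpcstats of=/dev/null bs=1M count=64

  # Dump the histograms; "pages per rpc" shows how well reads aggregated
  # into full-sized RPCs.
  lctl get_param osc.*.rpc_stats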
PASS 101c (24s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 101d: file read with and without read-ahead enabled ========================================================== 21:17:10 (1713403030) Create test file /mnt/lustre/f101d.sanity size 80M, 6939M free 80+0 records in 80+0 records out 83886080 bytes (84 MB) copied, 2.0441 s, 41.0 MB/s Cancel LRU locks on lustre client to flush the client cache Disable read-ahead 0 Reading the test file /mnt/lustre/f101d.sanity with read-ahead disabled Cancel LRU locks on lustre client to flush the client cache Enable read-ahead with 40MB Reading the test file /mnt/lustre/f101d.sanity with read-ahead enabled read-ahead disabled time read '84.9274' read-ahead enabled time read '27.3577' Waiting for MDT destroys to complete PASS 101d (126s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 101e: check read-ahead for small read(1k) for small files(500k) ========================================================== 21:19:18 (1713403158) Creating 100 500K test files Cancel LRU locks on lustre client to flush the client cache Reset readahead stats llite.lustre-ffff8800803d8800.max_cached_mb= users: 6 max_cached_mb: 1865 used_mb: 49 unused_mb: 1816 reclaim_count: 132982 max_read_ahead_mb: 256 used_read_ahead_mb: 0 llite.lustre-ffff8800803d8800.read_ahead_stats= snapshot_time 1713403179.684932889 secs.nsecs start_time 1713403170.755385030 secs.nsecs elapsed_time 8.929547859 secs.nsecs hits 12300 samples [pages] misses 200 samples [pages] zero_size_window 100 samples [pages] failed_to_fast_read 200 samples [pages] readahead_pages 100 samples [pages] 123 123 12300 PASS 101e (25s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 101f: check mmap read performance ========= 21:19:44 (1713403184) /opt/iozone/bin/iozone debug=reada mmap Cancel LRU locks on lustre client to flush the client cache Reset readahead stats mmap read the file with small block size checking missing pages llite.lustre-ffff8800803d8800.read_ahead_stats= snapshot_time 1713403185.688088365 secs.nsecs start_time 1713403185.621175635 secs.nsecs elapsed_time 0.066912730 secs.nsecs debug=trace inode super iotrace malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec lfsck hsm snapshot layout PASS 101f (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 101g: Big bulk(4/16 MiB) readahead ======== 21:19:48 (1713403188) remount client to enable new RPC size Stopping client oleg146-client.virtnet /mnt/lustre (opts:) Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre osc.lustre-OST0000-osc-ffff88012a51d000.max_pages_per_rpc=16M osc.lustre-OST0001-osc-ffff88012a51d000.max_pages_per_rpc=16M 10+0 records in 10+0 records out 167772160 bytes (168 MB) copied, 3.93573 s, 42.6 MB/s 10+0 records in 10+0 records out 167772160 bytes (168 MB) copied, 2.98474 s, 56.2 MB/s osc.lustre-OST0000-osc-ffff88012a51d000.max_pages_per_rpc=8M osc.lustre-OST0001-osc-ffff88012a51d000.max_pages_per_rpc=8M 10+0 records in 10+0 records out 83886080 bytes (84 MB) copied, 2.14019 s, 39.2 MB/s 10+0 records in 10+0 records out 83886080 bytes (84 MB) copied, 2.23151 s, 37.6 MB/s osc.lustre-OST0000-osc-ffff88012a51d000.max_pages_per_rpc=4M 
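[editor's note] Test 101d's timing gap above (84.9 s with read-ahead off vs 27.4 s with it on) reduces to flushing the client cache and flipping one llite tunable. A rough reproduction, assuming the parameter names this log shows (max_read_ahead_mb; LRU lock cancellation to drop cached pages) and a hypothetical ~80M file:

  #!/bin/bash
  FILE=/mnt/lustre/f_ra_demo    # hypothetical test file

  # Cancel LRU locks to flush client-side cached pages, as the test does.
  lctl set_param ldlm.namespaces.*.lru_size=clear
  lctl set_param llite.*.max_read_ahead_mb=0     # disable read-ahead
  time dd if=$FILE of=/dev/null bs=4k

  lctl set_param ldlm.namespaces.*.lru_size=clear
  lctl set_param llite.*.max_read_ahead_mb=40    # re-enable with a 40MB window
  time dd if=$FILE of=/dev/null bs=4k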
osc.lustre-OST0001-osc-ffff88012a51d000.max_pages_per_rpc=4M 10+0 records in 10+0 records out 41943040 bytes (42 MB) copied, 1.15052 s, 36.5 MB/s 10+0 records in 10+0 records out 41943040 bytes (42 MB) copied, 0.834155 s, 50.3 MB/s Stopping client oleg146-client.virtnet /mnt/lustre (opts:) Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre PASS 101g (23s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 101h: Readahead should cover current read window ========================================================== 21:20:14 (1713403214) 70+0 records in 70+0 records out 73400320 bytes (73 MB) copied, 1.83281 s, 40.0 MB/s Cancel LRU locks on lustre client to flush the client cache Reset readahead stats Read 10M of data but cross 64M boundary 1+0 records in 1+0 records out 10485760 bytes (10 MB) copied, 0.275197 s, 38.1 MB/s PASS 101h (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 101i: allow current readahead to exceed reservation ========================================================== 21:20:20 (1713403220) 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.264642 s, 39.6 MB/s llite.lustre-ffff88012b776800.max_read_ahead_per_file_mb=1 Reset readahead stats llite.lustre-ffff88012b776800.read_ahead_stats=0 5+0 records in 5+0 records out 10485760 bytes (10 MB) copied, 0.39207 s, 26.7 MB/s llite.lustre-ffff88012b776800.read_ahead_stats= snapshot_time 1713403222.005822947 secs.nsecs start_time 1713403221.601152255 secs.nsecs elapsed_time 0.404670692 secs.nsecs hits 2555 samples [pages] misses 5 samples [pages] zero_size_window 2555 samples [pages] failed_to_fast_read 6 samples [pages] readahead_pages 5 samples [pages] 511 511 2555 llite.lustre-ffff88012b776800.max_read_ahead_per_file_mb=64 PASS 101i (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 101j: A complete read block should be submitted when no RA ========================================================== 21:20:25 (1713403225) Disable read-ahead 16+0 records in 16+0 records out 16777216 bytes (17 MB) copied, 0.417559 s, 40.2 MB/s Reset readahead stats 4096+0 records in 4096+0 records out 16777216 bytes (17 MB) copied, 13.4993 s, 1.2 MB/s snapshot_time 1713403240.354675181 secs.nsecs start_time 1713403226.824474101 secs.nsecs elapsed_time 13.530201080 secs.nsecs failed_to_fast_read 4096 samples [pages] Reset readahead stats 16+0 records in 16+0 records out 16777216 bytes (17 MB) copied, 0.629474 s, 26.7 MB/s snapshot_time 1713403241.197060142 secs.nsecs start_time 1713403240.541603050 secs.nsecs elapsed_time 0.655457092 secs.nsecs failed_to_fast_read 16 samples [pages] Reset readahead stats 1+0 records in 1+0 records out 16777216 bytes (17 MB) copied, 0.255669 s, 65.6 MB/s snapshot_time 1713403241.645948069 secs.nsecs start_time 1713403241.369998542 secs.nsecs elapsed_time 0.275949527 secs.nsecs failed_to_fast_read 1 samples [pages] PASS 101j (18s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 101m: read ahead for small file and last stripe of the file ========================================================== 21:20:45 (1713403245) keep default fallocate mode: 0 Test readahead: size=4096 ramax= iosz=1048576 short read: 0 ->+ 1048576 -> 4096 4096 short read: 4096 ->+ 1044480 -> 4096 0 snapshot_time 1713403246.382580178 secs.nsecs start_time 1713403246.294293301
secs.nsecs elapsed_time 0.088286877 secs.nsecs misses 2 samples [pages] zero_size_window 2 samples [pages] failed_to_fast_read 1 samples [pages] Test readahead: size=16384 ramax= iosz=1048576 short read: 0 ->+ 1048576 -> 16384 16384 short read: 16384 ->+ 1032192 -> 16384 0 snapshot_time 1713403246.503384815 secs.nsecs start_time 1713403246.417250463 secs.nsecs elapsed_time 0.086134352 secs.nsecs hits 3 samples [pages] misses 2 samples [pages] zero_size_window 4 samples [pages] failed_to_fast_read 1 samples [pages] readahead_pages 1 samples [pages] 3 3 3 Test readahead: size=16385 ramax= iosz=1048576 short read: 0 ->+ 1048576 -> 16385 16385 short read: 16385 ->+ 1032191 -> 16385 0 snapshot_time 1713403246.621483454 secs.nsecs start_time 1713403246.538107826 secs.nsecs elapsed_time 0.083375628 secs.nsecs hits 4 samples [pages] misses 1 samples [pages] zero_size_window 4 samples [pages] readahead_to_eof 1 samples [pages] failed_to_fast_read 1 samples [pages] readahead_pages 1 samples [pages] 4 4 4 Test readahead: size=16383 ramax= iosz=1048576 short read: 0 ->+ 1048576 -> 16383 16383 short read: 16383 ->+ 1032193 -> 16383 0 snapshot_time 1713403246.734463921 secs.nsecs start_time 1713403246.656020235 secs.nsecs elapsed_time 0.078443686 secs.nsecs hits 3 samples [pages] misses 1 samples [pages] zero_size_window 3 samples [pages] readahead_to_eof 1 samples [pages] failed_to_fast_read 1 samples [pages] readahead_pages 1 samples [pages] 3 3 3 Test readahead: size=1048577 ramax= iosz=2097152 short read: 0 ->+ 2097152 -> 1048577 1048577 short read: 1048577 ->+ 1048575 -> 1048577 0 snapshot_time 1713403246.916593860 secs.nsecs start_time 1713403246.771446033 secs.nsecs elapsed_time 0.145147827 secs.nsecs hits 256 samples [pages] misses 1 samples [pages] zero_size_window 256 samples [pages] readahead_to_eof 1 samples [pages] failed_to_fast_read 1 samples [pages] readahead_pages 1 samples [pages] 256 256 256 Test readahead: size=1064960 ramax= iosz=2097152 short read: 0 ->+ 2097152 -> 1064960 1064960 short read: 1064960 ->+ 1032192 -> 1064960 0 snapshot_time 1713403247.104326305 secs.nsecs start_time 1713403246.959955085 secs.nsecs elapsed_time 0.144371220 secs.nsecs hits 259 samples [pages] misses 2 samples [pages] zero_size_window 260 samples [pages] failed_to_fast_read 1 samples [pages] readahead_pages 1 samples [pages] 259 259 259 Test readahead: size=1064960 ramax= iosz=2097152 short read: 0 ->+ 2097152 -> 1064960 1064960 short read: 1064960 ->+ 1032192 -> 1064960 0 snapshot_time 1713403247.342050994 secs.nsecs start_time 1713403247.197783258 secs.nsecs elapsed_time 0.144267736 secs.nsecs hits 259 samples [pages] misses 2 samples [pages] zero_size_window 260 samples [pages] failed_to_fast_read 1 samples [pages] readahead_pages 1 samples [pages] 259 259 259 Test readahead: size=2113536 ramax= iosz=3145728 short read: 0 ->+ 3145728 -> 2113536 2113536 short read: 2113536 ->+ 1032192 -> 2113536 0 snapshot_time 1713403247.617740682 secs.nsecs start_time 1713403247.425173202 secs.nsecs elapsed_time 0.192567480 secs.nsecs hits 515 samples [pages] misses 2 samples [pages] zero_size_window 516 samples [pages] failed_to_fast_read 1 samples [pages] readahead_pages 1 samples [pages] 515 515 515 Test readahead: size=4210688 ramax= iosz=5242880 short read: 0 ->+ 5242880 -> 4210688 4210688 short read: 4210688 ->+ 1032192 -> 4210688 0 snapshot_time 1713403247.963336141 secs.nsecs start_time 1713403247.711579022 secs.nsecs elapsed_time 0.251757119 secs.nsecs hits 1026 samples [pages] misses 3 samples [pages] 
zero_size_window 1027 samples [pages] failed_to_fast_read 1 samples [pages] readahead_pages 2 samples [pages] 3 1023 1026 PASS 101m (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102a: user xattr test ============================================================================================ 21:20:52 (1713403252) set/get xattr... trusted.name1="value1" user.author1="author1" listxattr... remove xattr... set lustre special xattr ... lfs setstripe: setstripe error for '/mnt/lustre/f102a.sanity': stripe already set getfattr: Removing leading '/' from absolute path names setfattr: /mnt/lustre/f102a.sanity: Numerical result out of range PASS 102a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102b: getfattr/setfattr for trusted.lov EAs ========================================================== 21:20:57 (1713403257) test layout '-S 65536 -i 1 -c 2' lmm_stripe_count: 2 lmm_stripe_size: 65536 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 1 lmm_objects: - l_ost_idx: 1 l_fid: 0x2c0000401:0xf38:0x0 - l_ost_idx: 0 l_fid: 0x280000401:0xf46:0x0 get/set/list trusted.lov xattr ... getfattr: Removing leading '/' from absolute path names setfattr 4 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 6 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 8 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 10 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 12 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 14 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 16 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 18 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 20 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 22 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 24 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 26 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 28 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 30 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 32 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 34 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 36 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 38 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 40 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 42 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 44 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 46 /mnt/lustre/f102b.sanity.2 
setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 48 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 50 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 52 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 54 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 56 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 58 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 60 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 62 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 64 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range setfattr 66 /mnt/lustre/f102b.sanity.2 setfattr 68 /mnt/lustre/f102b.sanity.2 setfattr 70 /mnt/lustre/f102b.sanity.2 setfattr 72 /mnt/lustre/f102b.sanity.2 setfattr 74 /mnt/lustre/f102b.sanity.2 setfattr 76 /mnt/lustre/f102b.sanity.2 setfattr 78 /mnt/lustre/f102b.sanity.2 setfattr 80 /mnt/lustre/f102b.sanity.2 setfattr 82 /mnt/lustre/f102b.sanity.2 setfattr 84 /mnt/lustre/f102b.sanity.2 setfattr 86 /mnt/lustre/f102b.sanity.2 setfattr 88 /mnt/lustre/f102b.sanity.2 setfattr 90 /mnt/lustre/f102b.sanity.2 setfattr 92 /mnt/lustre/f102b.sanity.2 setfattr 94 /mnt/lustre/f102b.sanity.2 setfattr 96 /mnt/lustre/f102b.sanity.2 setfattr 98 /mnt/lustre/f102b.sanity.2 setfattr 100 /mnt/lustre/f102b.sanity.2 setfattr 102 /mnt/lustre/f102b.sanity.2 setfattr 104 /mnt/lustre/f102b.sanity.2 setfattr 106 /mnt/lustre/f102b.sanity.2 setfattr 108 /mnt/lustre/f102b.sanity.2 setfattr 110 /mnt/lustre/f102b.sanity.2 setfattr 112 /mnt/lustre/f102b.sanity.2 setfattr 114 /mnt/lustre/f102b.sanity.2 setfattr 116 /mnt/lustre/f102b.sanity.2 setfattr 118 /mnt/lustre/f102b.sanity.2 setfattr 120 /mnt/lustre/f102b.sanity.2 setfattr 122 /mnt/lustre/f102b.sanity.2 setfattr 124 /mnt/lustre/f102b.sanity.2 setfattr 126 /mnt/lustre/f102b.sanity.2 setfattr 128 /mnt/lustre/f102b.sanity.2 setfattr 130 /mnt/lustre/f102b.sanity.2 setfattr 132 /mnt/lustre/f102b.sanity.2 setfattr 134 /mnt/lustre/f102b.sanity.2 setfattr 136 /mnt/lustre/f102b.sanity.2 setfattr 138 /mnt/lustre/f102b.sanity.2 setfattr 140 /mnt/lustre/f102b.sanity.2 setfattr 142 /mnt/lustre/f102b.sanity.2 setfattr 144 /mnt/lustre/f102b.sanity.2 setfattr 146 /mnt/lustre/f102b.sanity.2 setfattr 148 /mnt/lustre/f102b.sanity.2 setfattr 150 /mnt/lustre/f102b.sanity.2 setfattr 152 /mnt/lustre/f102b.sanity.2 setfattr 154 /mnt/lustre/f102b.sanity.2 setfattr 156 /mnt/lustre/f102b.sanity.2 setfattr 158 /mnt/lustre/f102b.sanity.2 setfattr 160 /mnt/lustre/f102b.sanity.2 setfattr 162 /mnt/lustre/f102b.sanity.2 test layout '-E 1M -S 65536 -i 1 -c 2 -Eeof -S4M' lcm_layout_gen: 2 lcm_mirror_count: 1 lcm_entry_count: 2 component0: lcme_id: 1 lcme_mirror_id: 0 lcme_flags: init lcme_extent.e_start: 0 lcme_extent.e_end: 1048576 sub_layout: lmm_stripe_count: 2 lmm_stripe_size: 65536 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 1 lmm_objects: - l_ost_idx: 1 l_fid: 0x2c0000401:0xf3a:0x0 - l_ost_idx: 0 l_fid: 0x280000401:0xf48:0x0 component1: lcme_id: 2 lcme_mirror_id: 0 lcme_flags: 0 lcme_extent.e_start: 1048576 lcme_extent.e_end: EOF sub_layout: lmm_stripe_count: 2 
lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: -1 get/set/list trusted.lov xattr ... getfattr: Removing leading '/' from absolute path names setfattr 4 /mnt/lustre/f102b.sanity.2 setfattr: /mnt/lustre/f102b.sanity.2: Numerical result out of range [the same setfattr probe repeats for every even size from 6 through 480, each rejected with 'Numerical result out of range'] setfattr 482 /mnt/lustre/f102b.sanity.2 PASS 102b (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102c: non-root getfattr/setfattr for lustre.lov EAs ===================================================================== 21:21:03 (1713403263) get/set/list lustre.lov xattr ... striped dir -i0 -c2 -H crush2 /mnt/lustre/d102c.sanity running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [65536] [-i] [1] [-c] [2] [/mnt/lustre/d102c.sanity/f102c.sanity] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [getstripe] [-c] [/mnt/lustre/d102c.sanity/f102c.sanity] lustre.lov=0s0AvRCwEAAAAZAAAAAAAAAAoEAAACAAAAAAABAAIAAAABBADAAgAAADwPAAAAAAAAAAAAAAEAAAABBACAAgAAAEoPAAAAAAAAAAAAAAAAAAA= running as uid/gid/euid/egid 500/500/500/500, groups: [mcreate] [/mnt/lustre/d102c.sanity/f102c.sanity2] running as uid/gid/euid/egid 500/500/500/500, groups: [setfattr] [-n] [lustre.lov] [-v] [0s0AvRCwEAAAAZAAAAAAAAAAoEAAACAAAAAAABAAIAAAABBADAAgAAADwPAAAAAAAAAAAAAAEAAAABBACAAgAAAEoPAAAAAAAAAAAAAAAAAAA=] [/mnt/lustre/d102c.sanity/f102c.sanity2] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [getstripe] [-S] [/mnt/lustre/d102c.sanity/f102c.sanity2] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [getstripe] [-c] [/mnt/lustre/d102c.sanity/f102c.sanity2] PASS 102c (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102d: tar restore stripe info from tarfile, not keep osts ========================================================== 21:21:08 (1713403268) striped dir -i0 -c2 -H crush2 /mnt/lustre/d102d.sanity PASS 102d (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102f: tar copy files, not keep osts ======= 21:21:17 (1713403277) striped dir -i0 -c2 -H crush /mnt/lustre/d102f.sanity striped dir -i0 -c2 -H crush2 /mnt/lustre/d102f.sanity.restore PASS 102f (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102h: grow xattr from inside inode to external block ========================================================== 21:21:27 (1713403287) save trusted.big on /mnt/lustre/f102h.sanity save trusted.sml on
/mnt/lustre/f102h.sanity grow trusted.sml on /mnt/lustre/f102h.sanity trusted.big still valid after growing trusted.sml PASS 102h (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102ha: grow xattr from inside inode to external inode ========================================================== 21:21:34 (1713403294) setting xattr of max xattr size: 65536 save trusted.big on /mnt/lustre/f102ha.sanity save trusted.sml on /mnt/lustre/f102ha.sanity grow trusted.sml on /mnt/lustre/f102ha.sanity trusted.big still valid after growing trusted.sml setting xattr of > max xattr size: 65536 + 10 This should fail: save trusted.big on /mnt/lustre/f102ha.sanity setfattr: /mnt/lustre/f102ha.sanity: Argument list too long PASS 102ha (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102i: lgetxattr test on symbolic link ====================================================================== 21:21:43 (1713403303) getfattr: Removing leading '/' from absolute path names # file: mnt/lustre/f102i.sanity trusted.lov=0s0AvRCwEAAAA1AAAAAAAAAAoEAAACAAAAAABAAAEAAAABBACAAgAAAF0PAAAAAAAAAAAAAAAAAAA= /mnt/lustre/f102i.sanitylink: trusted.lov: No such attribute PASS 102i (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102j: non-root tar restore stripe info from tarfile, not keep osts ============================================================= 21:21:48 (1713403308) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d102j.sanity running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [65536] [-i] [1] [-c] [2] [d102j.sanity] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [65536] [-i] [0] [-c] [1] [file1-0-1] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [65536] [-i] [1] [-c] [1] [file1-1-1] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [65536] [-i] [0] [-c] [2] [file1-0-2] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [65536] [-i] [1] [-c] [2] [file1-1-2] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [131072] [-i] [0] [-c] [1] [file2-0-1] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [131072] [-i] [1] [-c] [1] [file2-1-1] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [131072] [-i] [0] [-c] [2] [file2-0-2] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [131072] [-i] [1] [-c] [2] [file2-1-2] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [196608] [-i] [0] [-c] [1] [file3-0-1] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [196608] [-i] [1] [-c] [1] [file3-1-1] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [196608] [-i] [0] [-c] [2] [file3-0-2] running as uid/gid/euid/egid 500/500/500/500, groups: 
[/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [196608] [-i] [1] [-c] [2] [file3-1-2] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [262144] [-i] [0] [-c] [1] [file4-0-1] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [262144] [-i] [1] [-c] [1] [file4-1-1] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [262144] [-i] [0] [-c] [2] [file4-0-2] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-S] [262144] [-i] [1] [-c] [2] [file4-1-2] running as uid/gid/euid/egid 500/500/500/500, groups: [tar] [cf] [/tmp/f102.tar] [d102j.sanity] [--xattrs] running as uid/gid/euid/egid 500/500/500/500, groups: [tar] [xf] [/tmp/f102.tar] [-C] [/mnt/lustre/d102j.sanity] [--xattrs] [--xattrs-include=lustre.*] PASS 102j (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102k: setfattr without parameter of value shouldn't cause a crash ========================================================== 21:21:57 (1713403317) striped dir -i0 -c2 -H crush /mnt/lustre/d102k.sanity PASS 102k (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102l: listxattr size test ============================================================================================ 21:22:01 (1713403321) listxattr as user... PASS 102l (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102m: Ensure listxattr fails on small buffer ================================================================== 21:22:06 (1713403326) PASS 102m (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102n: silently ignore setxattr on internal trusted xattrs ========================================================== 21:22:10 (1713403330) setfattr: /mnt/lustre/f102n.sanity.1: Numerical result out of range PASS 102n (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102p: check setxattr(2) correctly fails without permission ========================================================== 21:22:16 (1713403336) setfacl as user... running as uid/gid/euid/egid 500/500/500/500, groups: [setfacl] [-m] [u:500:rwx] [/mnt/lustre/f102p.sanity] setfacl: /mnt/lustre/f102p.sanity: Operation not permitted setfattr as user...
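[editor's note] Tests 102c and 102j above rely on the same mechanism: a file's striping is exposed to userspace as the lustre.lov xattr, so copying that xattr, either directly or via tar --xattrs --xattrs-include=lustre.*, recreates the layout on a new file. A sketch of the manual version with hypothetical file names; the getfattr/setfattr usage mirrors what the log shows (setfattr accepts the 0s-prefixed base64 form):

  #!/bin/bash
  SRC=/mnt/lustre/f_src; DST=/mnt/lustre/f_dst    # hypothetical files

  lfs setstripe -S 65536 -c 2 $SRC                # source with an explicit layout

  # Read the layout as a base64-encoded xattr value ("lustre.lov=0s..." form).
  LOV=$(getfattr --absolute-names -e base64 -n lustre.lov $SRC |
        grep '^lustre.lov=' | cut -d= -f2-)

  # Apply it to an empty file; the MDS decodes it back into a striping layout.
  touch $DST
  setfattr -n lustre.lov -v "$LOV" $DST
  lfs getstripe $DST                              # verify stripe count/size match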
running as uid/gid/euid/egid 500/500/500/500, groups: [setfattr] [-x] [system.posix_acl_access] [/mnt/lustre/f102p.sanity] setfattr: /mnt/lustre/f102p.sanity: Operation not permitted PASS 102p (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102q: flistxattr should not return trusted.link EAs for orphans ========================================================== 21:22:20 (1713403340) PASS 102q (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102r: set EAs with empty values =========== 21:22:25 (1713403345) getfattr: Removing leading '/' from absolute path names # file: mnt/lustre/f102r.sanity user.f102r.sanity getfattr: Removing leading '/' from absolute path names # file: mnt/lustre/d102r.sanity user.d102r.sanity striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d102r.sanity getfattr: Removing leading '/' from absolute path names # file: mnt/lustre/d102r.sanity user.d102r.sanity PASS 102r (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102s: getting nonexistent xattrs should fail ========================================================== 21:22:30 (1713403350) llite.lustre-ffff88012b776800.xattr_cache=0 /mnt/lustre/f102s.sanity: lustre.n102s: Operation not supported /mnt/lustre/f102s.sanity: security.n102s: No such attribute /mnt/lustre/f102s.sanity: system.n102s: Operation not supported /mnt/lustre/f102s.sanity: trusted.n102s: No such attribute /mnt/lustre/f102s.sanity: user.n102s: No such attribute llite.lustre-ffff88012b776800.xattr_cache=1 /mnt/lustre/f102s.sanity: lustre.n102s: No such attribute /mnt/lustre/f102s.sanity: security.n102s: No such attribute /mnt/lustre/f102s.sanity: system.n102s: Operation not supported /mnt/lustre/f102s.sanity: trusted.n102s: No such attribute /mnt/lustre/f102s.sanity: user.n102s: No such attribute PASS 102s (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 102t: zero length xattr values handled correctly ========================================================== 21:22:35 (1713403355) llite.lustre-ffff88012b776800.xattr_cache=0 llite.lustre-ffff88012b776800.xattr_cache=1 PASS 102t (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 103a: acl test ============================ 21:22:39 (1713403359) /usr/bin/setfacl mdt.lustre-MDT0000.job_xattr=NONE mdt.lustre-MDT0001.job_xattr=NONE uid=1(bin) gid=1(bin) groups=1(bin) uid=2(daemon) gid=2(daemon) groups=2(daemon),1(bin) users:x:100: Adding user daemon to group bin Adding user daemon to group bin performing cp with bin='bin' daemon='daemon' users='users'... [3] $ umask 022 -- ok [4] $ mkdir d -- ok [5] $ cd d -- ok [6] $ touch f -- ok [7] $ setfacl -m u:bin:rw f -- ok [8] $ ls -l f | awk -- '{ print $1 }' -- ok [11] $ cp f g -- ok [12] $ ls -l g | awk -- '{sub(/\./, "", $1); print $1 }' -- ok [15] $ rm g -- ok [16] $ cp -p f g -- ok [17] $ ls -l f | awk -- '{ print $1 }' -- ok [20] $ mkdir h -- ok [21] $ echo blubb > h/x -- ok [22] $ cp -rp h i -- ok [23] $ cat i/x -- ok [26] $ rm -r i -- ok [31] $ setfacl -R -m u:bin:rwx h -- ok [32] $ getfacl --omit-header h/x -- ok [40] $ cp -rp h i -- ok [41] $ getfacl --omit-header i/x -- ok [49] $ cd .. -- ok [50] $ rm -r d -- ok 22 commands (22 passed, 0 failed) performing getfacl-noacl with bin='bin' daemon='daemon' users='users'... 
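[editor's note] The 102r/102s/102t trio above probes xattr edge cases: empty values, nonexistent names, and the client-side xattr cache, which the log toggles through llite.*.xattr_cache and which can change the error returned for a missing attribute. A condensed sketch of the same probes; the file name is made up:

  #!/bin/bash
  F=/mnt/lustre/f_xattr_demo; touch $F

  setfattr -n user.empty $F          # set an EA with no value
  getfattr -d $F                     # should list user.empty=""

  getfattr -n user.missing $F        # expect "No such attribute"

  # Repeat the lookup with the xattr cache off, as 102s does above:
  # cached and uncached paths may report different errors.
  lctl set_param llite.*.xattr_cache=0
  getfattr -n user.missing $F
  lctl set_param llite.*.xattr_cache=1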
[4] $ mkdir test -- ok [5] $ cd test -- ok [6] $ umask 027 -- ok [7] $ touch x -- ok [8] $ getfacl --omit-header x -- ok [14] $ getfacl --omit-header --access x -- ok [20] $ getfacl --omit-header -d x -- ok [21] $ getfacl --omit-header -d . -- ok [22] $ getfacl --omit-header -d / -- ok [25] $ getfacl --skip-base x -- ok [26] $ getfacl --omit-header --all-effective x -- ok [32] $ getfacl --omit-header --no-effective x -- ok [38] $ mkdir d -- ok [39] $ touch d/y -- ok [46] $ getfacl -dRP . | grep file | sort -- ok [51] $ ln -s d l -- ok [53] $ ln -s l ll -- ok [62] $ rm l ll x -- ok [63] $ rm -rf d -- ok [64] $ cd .. -- ok [65] $ rmdir test -- ok 21 commands (21 passed, 0 failed) performing misc with bin='bin' daemon='daemon' users='users'... [6] $ umask 027 -- ok [7] $ touch f -- ok [10] $ setfacl -m u::r f -- ok [11] $ setfacl -m u::rw,u:bin:rw f -- ok [12] $ ls -dl f | awk '{print $1}' -- ok [15] $ getfacl --omit-header f -- ok [23] $ rm f -- ok [24] $ umask 022 -- ok [25] $ touch f -- ok [26] $ setfacl -m u:bin:rw f -- ok [27] $ ls -dl f | awk '{print $1}' -- ok [30] $ getfacl --omit-header f -- ok [38] $ rm f -- ok [39] $ umask 027 -- ok [40] $ mkdir d -- ok [41] $ setfacl -m u:bin:rwx d -- ok [42] $ ls -dl d | awk '{print $1}' -- ok [45] $ getfacl --omit-header d -- ok [53] $ rmdir d -- ok [54] $ umask 022 -- ok [55] $ mkdir d -- ok [56] $ setfacl -m u:bin:rwx d -- ok [57] $ ls -dl d | awk '{print $1}' -- ok [60] $ getfacl --omit-header d -- ok [68] $ rmdir d -- ok [73] $ umask 022 -- ok [74] $ touch f -- ok [75] $ setfacl -m u:bin:rw,u:daemon:r f -- ok [76] $ ls -dl f | awk '{print $1}' -- ok [79] $ getfacl --omit-header f -- ok [90] $ setfacl -m g:users:rw,g:daemon:r f -- ok [91] $ ls -dl f | awk '{print $1}' -- ok [94] $ getfacl --omit-header f -- ok [107] $ setfacl -x g:users f -- ok [108] $ ls -dl f | awk '{print $1}' -- ok [111] $ getfacl --omit-header f -- ok [123] $ setfacl -x u:daemon f -- ok [124] $ ls -dl f | awk '{print $1}' -- ok [127] $ getfacl --omit-header f -- ok [136] $ rm f -- ok [140] $ umask 027 -- ok [141] $ mkdir d -- ok [142] $ setfacl -m u:bin:rwx,u:daemon:rw,d:u:bin:rwx,d:m:rx d -- ok [143] $ ls -dl d | awk '{print $1}' -- ok [146] $ getfacl --omit-header d -- ok [162] $ umask 027 -- ok [163] $ touch d/f -- ok [164] $ ls -dl d/f | awk '{print $1}' -- ok [167] $ getfacl --omit-header d/f -- ok [175] $ rm d/f -- ok [176] $ umask 022 -- ok [177] $ touch d/f -- ok [178] $ ls -dl d/f | awk '{print $1}' -- ok [181] $ getfacl --omit-header d/f -- ok [189] $ rm d/f -- ok [193] $ umask 000 -- ok [194] $ mkdir d/d -- ok [195] $ ls -dl d/d | awk '{print $1}' -- ok [198] $ getfacl --omit-header d/d -- ok [211] $ rmdir d/d -- ok [212] $ umask 022 -- ok [213] $ mkdir d/d -- ok [214] $ ls -dl d/d | awk '{print $1}' -- ok [217] $ getfacl --omit-header d/d -- ok [232] $ setfacl -nm u:daemon:rx,d:u:daemon:rx,g:users:rx,g:daemon:rwx d/d -- ok [233] $ ls -dl d/d | awk '{print $1}' -- ok [236] $ getfacl --omit-header d/d -- ok [256] $ ln -s d d/l -- ok [257] $ ls -dl d/l | awk '{ sub(/\.$/, "", $1); print $1 }' -- ok [260] $ ls -dl -L d/l | awk '{print $1}' -- ok [265] $ cd d -- ok [266] $ getfacl --omit-header l -- ok [283] $ cd .. 
-- ok [285] $ rm d/l -- ok [289] $ setfacl -m g:daemon:rx,u:bin:rx d/d -- ok [290] $ ls -dl d/d | awk '{print $1}' -- ok [293] $ getfacl --omit-header d/d -- ok [310] $ setfacl -m d:u:bin:rwx d/d -- ok [311] $ ls -dl d/d | awk '{print $1}' -- ok [314] $ getfacl --omit-header d/d -- ok [331] $ rmdir d/d -- ok [335] $ setfacl -k d -- ok [336] $ ls -dl d | awk '{print $1}' -- ok [339] $ getfacl --omit-header d -- ok [350] $ setfacl -b d -- ok [351] $ ls -dl d | awk '{sub(/\./, "", $1); print $1}' -- ok [354] $ getfacl --omit-header d -- ok [362] $ chmod 775 d -- ok [363] $ ls -dl d | awk '{sub(/\./, "", $1); print $1}' -- ok [366] $ getfacl --omit-header d -- ok [372] $ rmdir d -- ok [373] $ umask 002 -- ok [374] $ mkdir d -- ok [375] $ setfacl -m u:daemon:rwx,u:bin:rx,d:u:daemon:rwx,d:u:bin:rx d -- ok [376] $ ls -dl d | awk '{print $1}' -- ok [379] $ getfacl --omit-header d -- ok [394] $ chmod 750 d -- ok [395] $ ls -dl d | awk '{print $1}' -- ok [398] $ getfacl --omit-header d -- ok [413] $ chmod 750 d -- ok [414] $ ls -dl d | awk '{print $1}' -- ok [417] $ getfacl --omit-header d -- ok [432] $ rmdir d -- ok 103 commands (103 passed, 0 failed) performing permissions with bin='bin' daemon='daemon' users='users'... [12] $ id -u -- ok [19] $ mkdir d -- ok [20] $ cd d -- ok [21] $ umask 027 -- ok [22] $ touch f -- ok [23] $ ls -l f | awk -- '{sub(/\./, "", $1); print $1, $3, $4 }' -- ok [30] $ echo root > f -- ok [32] $ su daemon -- ok [33] $ echo daemon >> f -- ok [36] $ su -- ok [42] $ chown bin:bin f -- ok [43] $ ls -l f | awk -- '{sub(/\./, "", $1); print $1, $3, $4 }' -- ok [45] $ su bin -- ok [46] $ echo bin >> f -- ok [52] $ su daemon -- ok [53] $ cat f -- ok [57] $ echo daemon >> f -- ok [64] $ su bin -- ok [65] $ setfacl -m u:daemon:rw f -- ok [66] $ getfacl --omit-header f -- ok [77] $ su daemon -- ok [78] $ echo daemon >> f -- ok [79] $ cat f -- ok [88] $ su bin -- ok [89] $ chmod g-w f -- ok [90] $ getfacl --omit-header f -- ok [98] $ su daemon -- ok [99] $ echo daemon >> f -- ok [108] $ su bin -- ok [109] $ setfacl -m u:daemon:r,g:daemon:rw-,o::rw- f -- ok [111] $ su daemon -- ok [112] $ echo daemon >> f -- ok [119] $ su bin -- ok [120] $ setfacl -x u:daemon f -- ok [122] $ su daemon -- ok [123] $ echo daemon2 >> f -- ok [124] $ cat f -- ok [134] $ su bin -- ok [135] $ setfacl -m g:daemon:r f -- ok [137] $ su daemon -- ok [138] $ echo daemon3 >> f -- ok [145] $ su bin -- ok [146] $ setfacl -x g:daemon f -- ok [148] $ su daemon -- ok [149] $ echo daemon4 >> f -- ok [156] $ su -- ok [157] $ chgrp root f -- ok [159] $ su daemon -- ok [160] $ echo daemon5 >> f -- ok [161] $ cat f -- ok [172] $ su -- ok [173] $ setfacl -m g:bin:r,g:daemon:w f -- ok [175] $ su daemon -- ok [176] $ : < f -- ok [177] $ : > f -- ok [178] $ : <> f -- ok [186] $ su -- ok [187] $ mkdir -m 750 e -- ok [188] $ touch e/h -- ok [190] $ su bin -- ok [191] $ shopt -s nullglob ; echo e/* -- ok [194] $ echo i > e/i -- ok [197] $ su -- ok [198] $ setfacl -m u:bin:rx e -- ok [200] $ su bin -- ok [201] $ echo e/* -- ok [208] $ touch e/i 2>&1 | sed -e "s/touch .*e\/i.*:/touch \'e\/i\':/" -- ok [211] $ su -- ok [212] $ setfacl -m u:bin:rwx e -- ok [214] $ su bin -- ok [215] $ echo i > e/i -- ok [220] $ su -- ok [221] $ touch g -- ok [222] $ ln -s g l -- ok [223] $ setfacl -m u:bin:rw l -- ok [224] $ ls -l g | awk -- '{ print $1, $3, $4 }' -- ok [234] $ mknod -m 0660 hdt b 91 64 -- ok [235] $ mknod -m 0660 null c 1 3 -- ok [236] $ mkfifo -m 0660 fifo -- ok [238] $ su bin -- ok [239] $ : < hdt -- ok [241] $ : < null -- ok 
[243] $ : < fifo -- ok [246] $ su -- ok [247] $ setfacl -m u:bin:rw hdt null fifo -- ok [249] $ su bin -- ok [250] $ : < hdt -- ok [252] $ : < null -- ok [253] $ ( echo blah > fifo & ) ; cat fifo -- ok [261] $ su -- ok [262] $ mkdir -m 600 x -- ok [263] $ chown daemon:daemon x -- ok [264] $ echo j > x/j -- ok [265] $ ls -l x/j | awk -- '{sub(/\./, "", $1); print $1, $3, $4 }' -- ok [268] $ setfacl -m u:daemon:r x -- ok [270] $ ls -l x/j | awk -- '{sub(/\./, "", $1); print $1, $3, $4 }' -- ok [274] $ echo k > x/k -- ok [277] $ chmod 750 x -- ok [282] $ su -- ok [283] $ cd .. -- ok [284] $ rm -rf d -- ok 101 commands (101 passed, 0 failed) 99 nobody:x:99: /usr/bin/setfattr performing permissions_xattr with bin='bin' daemon='daemon' users='users'... [11] $ id -u -- ok [19] $ mkdir d -- ok [20] $ cd d -- ok [21] $ umask 027 -- ok [22] $ touch f -- ok [23] $ chown nobody:nobody f -- ok [24] $ ls -l f | awk -- '{ sub(/\.$/, "", $1); print $1, $3, $4 }' -- ok [26] $ su nobody -- ok [27] $ echo nobody > f -- ok [33] $ su bin -- ok [34] $ setfattr -n user.test.xattr -v 123456 f -- ok [41] $ su nobody -- ok [42] $ setfacl -m g:bin:rw f -- ok [43] $ getfacl --omit-header f -- ok [55] $ su bin -- ok [56] $ setfattr -n user.test.xattr -v 123456 f -- ok [57] $ getfattr -d f -- ok [66] $ su -- ok [67] $ ln -s f l -- ok [68] $ ls -l l | awk -- '{ sub(/\.$/, "", $1); print $1, $3, $4 }' -- ok [70] $ su bin -- ok [71] $ getfattr -d l -- ok [81] $ su -- ok [82] $ mkdir t -- ok [83] $ chown nobody:nobody t -- ok [84] $ chmod 1750 t -- ok [85] $ ls -dl t | awk -- '{ sub(/\.$/, "", $1); print $1, $3, $4 }' -- ok [87] $ su nobody -- ok [88] $ setfacl -m g:bin:rwx t -- ok [89] $ getfacl --omit-header t -- ok [96] $ su bin -- ok [97] $ setfattr -n user.test.xattr -v 654321 t -- ok [105] $ su -- ok [106] $ mkdir d -- ok [107] $ chown nobody:nobody d -- ok [108] $ chmod 750 d -- ok [109] $ ls -dl d | awk -- '{ sub(/\.$/, "", $1); print $1, $3, $4 }' -- ok [111] $ su nobody -- ok [112] $ setfacl -m g:bin:rwx d -- ok [113] $ getfacl --omit-header d -- ok [120] $ su bin -- ok [121] $ setfattr -n user.test.xattr -v 654321 d -- ok [122] $ getfattr -d d -- ok [131] $ su -- ok [132] $ mknod -m 0660 hdt b 91 64 -- ok [133] $ mknod -m 0660 null c 1 3 -- ok [134] $ mkfifo -m 0660 fifo -- ok [135] $ setfattr -n user.test.xattr -v 123456 hdt -- ok [137] $ setfattr -n user.test.xattr -v 123456 null -- ok [139] $ setfattr -n user.test.xattr -v 123456 fifo -- ok [145] $ su -- ok [146] $ cd .. -- ok [147] $ rm -rf d -- ok 53 commands (53 passed, 0 failed) performing setfacl with bin='bin' daemon='daemon' users='users'... 
[3] $ mkdir d -- ok [4] $ chown bin:bin d -- ok [5] $ cd d -- ok [7] $ su bin -- ok [8] $ sg bin -- [(1,0)(1 1,1 1)]ok [9] $ umask 027 -- ok [10] $ touch g -- ok [11] $ ls -dl g | awk '{sub(/\./, "", $1); print $1}' -- ok [14] $ setfacl -m m:- g -- ok [15] $ ls -dl g | awk '{print $1}' -- ok [18] $ getfacl g -- ok [28] $ setfacl -x m g -- ok [29] $ getfacl g -- ok [38] $ setfacl -m u:daemon:rw g -- ok [39] $ getfacl g -- ok [50] $ setfacl -m u::rwx,g::r-x,o:- g -- ok [51] $ getfacl g -- ok [62] $ setfacl -m u::rwx,g::r-x,o:-,m:- g -- ok [63] $ getfacl g -- ok [74] $ setfacl -m u::rwx,g::r-x,o:-,u:root:-,m:- g -- ok [75] $ getfacl g -- ok [87] $ setfacl -m u::rwx,g::r-x,o:-,u:root:-,m:- g -- ok [88] $ getfacl g -- ok [100] $ setfacl -m u::rwx,g::r-x,o:-,u:root:- g -- ok [101] $ getfacl g -- ok [113] $ setfacl --test -x u: g -- ok [116] $ setfacl --test -x u:x -- ok [119] $ setfacl -m d:u:root:rwx g -- ok [122] $ setfacl -x m g -- ok [129] $ mkdir d -- ok [130] $ setfacl --test -m u::rwx,u:bin:rwx,g::r-x,o::--- d -- ok [133] $ setfacl --test -m u::rwx,u:bin:rwx,g::r-x,m::---,o::--- d -- ok [136] $ setfacl --test -d -m u::rwx,u:bin:rwx,g::r-x,o::--- d -- ok [139] $ setfacl --test -d -m u::rwx,u:bin:rwx,g::r-x,m::---,o::--- d -- ok [142] $ su -- ok [143] $ cd .. -- ok [144] $ rm -r d -- ok 37 commands (37 passed, 0 failed) performing inheritance with bin='bin' daemon='daemon' users='users'... [4] $ id -u -- ok [7] $ mkdir d -- ok [8] $ setfacl -d -m group:bin:r-x d -- ok [9] $ getfacl d -- ok [23] $ mkdir d/subdir -- ok [24] $ getfacl d/subdir -- ok [40] $ touch d/f -- ok [41] $ ls -l d/f | awk -- '{ print $1 }' -- ok [43] $ getfacl d/f -- ok [54] $ su bin -- ok [55] $ echo i >> d/f -- ok [62] $ su -- ok [63] $ rm d/f -- ok [64] $ rmdir d/subdir -- ok [65] $ mv d tree -- ok [66] $ ./make-tree -- ok [67] $ getfacl tree/dir0/dir5/file4 -- ok [77] $ getfacl tree/dir0/dir6/file4 -- ok [87] $ echo i >> tree/dir6/dir2/file2 -- ok [88] $ echo i > tree/dir1/f -- ok [89] $ ls -l tree/dir1/f | awk -- '{ print $1 }' -- ok [98] $ rm -rf tree -- ok 22 commands (22 passed, 0 failed) LU-974 ignore umask when acl is enabled... performing 974 with bin='bin' daemon='daemon' users='users'... [3] $ umask 022 -- ok [4] $ mkdir 974 -- ok [6] $ touch 974/f1 -- ok [7] $ ls -dl 974/f1 | awk '{sub(/\./, "", $1); print $1 }' -- ok [10] $ setfacl -R -d -m mask:007 974 -- ok [11] $ touch 974/f2 -- ok [12] $ ls -dl 974/f2 | awk '{ print $1 }' -- ok [15] $ umask 077 -- ok [16] $ touch f3 -- ok [17] $ ls -dl f3 | awk '{sub(/\./, "", $1); print $1 }' -- ok [20] $ rm -rf 974 -- ok 11 commands (11 passed, 0 failed) performing 974_remote with bin='bin' daemon='daemon' users='users'... [4] $ umask 022 -- ok [5] $ lfs mkdir -i 1 974 -- ok [7] $ touch 974/f1 -- ok [8] $ ls -dl 974/f1 | awk '{ sub(/\.$/, "", $1); print $1 }' -- ok [11] $ setfacl -R -d -m mask:007 974 -- ok [12] $ touch 974/f2 -- ok [13] $ ls -dl 974/f2 | awk '{ sub(/\.$/, "", $1); print $1 }' -- ok [16] $ umask 077 -- ok [17] $ touch f3 -- ok [18] $ ls -dl f3 | awk '{ sub(/\.$/, "", $1); print $1 }' -- ok [21] $ rm -rf 974 -- ok 11 commands (11 passed, 0 failed) LU-2561 newly created file is same size as directory... performing 2561 with bin='bin' daemon='daemon' users='users'... [3] $ mkdir -p 2561 -- ok [4] $ cd 2561 -- ok [5] $ getfacl --access . | setfacl -d -M- . -- ok [6] $ touch f1 -- ok [7] $ stat -c '%s' f1 -- ok [9] $ cd .. -- ok [10] $ rm -rf 2561 -- ok 7 commands (7 passed, 0 failed) performing 4924 with bin='bin' daemon='daemon' users='users'... 
[3] $ mkdir 4924 -- ok [4] $ cd 4924 -- ok [5] $ touch f -- ok [6] $ chmod u=rwx,g=rwxs f -- ok [7] $ ls -l f | awk -- '{sub(/\./, "", $1); print $1, $3, $4 }' -- ok [9] $ touch f -- ok [10] $ ls -l f | awk -- '{sub(/\./, "", $1); print $1, $3, $4 }' -- ok [12] $ cd .. -- ok [13] $ rm -rf 4924 -- ok 9 commands (9 passed, 0 failed) mdt.lustre-MDT0000.job_xattr=user.job mdt.lustre-MDT0001.job_xattr=user.job PASS 103a (79s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 103b: umask lfs setstripe ================= 21:24:00 (1713403440) PASS 103b (42s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 103c: 'cp -rp' won't set empty acl ======== 21:24:44 (1713403484) getfattr: Removing leading '/' from absolute path names getfattr: Removing leading '/' from absolute path names PASS 103c (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 103e: inheritance of big amount of default ACLs ========================================================== 21:24:49 (1713403489) mdc.lustre-MDT0000-mdc-ffff88012b776800.stats=clear mdc.lustre-MDT0001-mdc-ffff88012b776800.stats=clear debug=0 7000 default ACLs created File: '/mnt/lustre/d103e.sanity' Size: 4096 Blocks: 8 IO Block: 1048576 directory Device: 2c54f966h/743766374d Inode: 144115205423500046 Links: 2 Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-17 21:24:50.000000000 -0400 Modify: 2024-04-17 21:24:50.000000000 -0400 Change: 2024-04-17 21:28:15.000000000 -0400 Birth: - File: '/mnt/lustre/d103e.sanity/f103e.sanity' Size: 0 Blocks: 0 IO Block: 4194304 regular empty file Device: 2c54f966h/743766374d Inode: 144115205423500047 Links: 1 Access: (0664/-rw-rw-r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-17 21:28:16.000000000 -0400 Modify: 2024-04-17 21:28:16.000000000 -0400 Change: 2024-04-17 21:28:16.000000000 -0400 Birth: - 7000 ACLs were inherited setfacl: /mnt/lustre/d103e.sanity/f103e.sanity: Argument list too long Added 1187 more ACLs to the file Total 8188 ACLs in file mdc.lustre-MDT0000-mdc-ffff88012b776800.stats= snapshot_time 1713403797.396503415 secs.nsecs start_time 1713403490.896758478 secs.nsecs elapsed_time 306.499744937 secs.nsecs req_waittime 85218 samples [usecs] 588 107784 196504391 665126293951 req_active 93114 samples [reqs] 1 2 125631 190665 ldlm_ibits_enqueue 28604 samples [reqs] 1 1 28604 28604 mds_close 1 samples [usecs] 1608 1608 1608 2585664 mds_getxattr 20118 samples [usecs] 588 4847 31326601 53899613771 ldlm_cancel 28305 samples [usecs] 606 4253 45146519 80510066325 mdc.lustre-MDT0001-mdc-ffff88012b776800.stats= snapshot_time 1713403797.396609950 secs.nsecs start_time 1713403490.896856397 secs.nsecs elapsed_time 306.499753553 secs.nsecs req_waittime 61 samples [usecs] 899 2600 106472 197700630 req_active 61 samples [reqs] 1 1 61 61 obd_ping 61 samples [usecs] 899 2600 106472 197700630 debug=trace inode super iotrace malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec lfsck hsm snapshot layout PASS 103e (308s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 103f: changelog doesn't interfere with default ACLs buffers ========================================================== 21:29:59 (1713403799) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog 
users: 'cl2 cl2' lustre-MDT0001: clear the changelog for cl2 of all records lustre-MDT0001: Deregistered changelog user #2 lustre-MDT0000: clear the changelog for cl2 of all records lustre-MDT0000: Deregistered changelog user #2 PASS 103f (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 104a: lfs df [-ih] [path] test =================================================================================== 21:30:07 (1713403807)
UUID                 1K-blocks     Used  Available  Use%  Mounted on
lustre-MDT0000_UUID    1414116     5540    1282148    1%  /mnt/lustre[MDT:0]
lustre-MDT0001_UUID    1414116     4216    1283472    1%  /mnt/lustre[MDT:1]
lustre-OST0000_UUID    3833116     4244    3602776    1%  /mnt/lustre[OST:0]
lustre-OST0001_UUID    3833116     1608    3605412    1%  /mnt/lustre[OST:1]
filesystem_summary:    7666232     5852    7208188    1%  /mnt/lustre

UUID                    Inodes    IUsed      IFree  IUse%  Mounted on
lustre-MDT0000_UUID    1000.0K      436     999.6K     1%  /mnt/lustre[MDT:0]
lustre-MDT0001_UUID    1000.0K      388     999.6K     1%  /mnt/lustre[MDT:1]
lustre-OST0000_UUID     256.0K      489     255.5K     1%  /mnt/lustre[OST:0]
lustre-OST0001_UUID     256.0K      972     255.1K     1%  /mnt/lustre[OST:1]
filesystem_summary:     511.4K      824     510.6K     1%  /mnt/lustre

UUID                     bytes     Used  Available  Use%  Mounted on
lustre-MDT0000_UUID       1.3G     5.4M       1.2G    1%  /mnt/lustre[MDT:0]
lustre-MDT0001_UUID       1.3G     4.1M       1.2G    1%  /mnt/lustre[MDT:1]
lustre-OST0000_UUID       3.7G     4.1M       3.4G    1%  /mnt/lustre[OST:0]
lustre-OST0001_UUID       3.7G     1.6M       3.4G    1%  /mnt/lustre[OST:1]
filesystem_summary:       7.3G     5.7M       6.9G    1%  /mnt/lustre

UUID                    Inodes    IUsed      IFree  IUse%  Mounted on
lustre-MDT0000_UUID    1024000      436    1023564     1%  /mnt/lustre[MDT:0]
lustre-MDT0001_UUID    1024000      388    1023612     1%  /mnt/lustre[MDT:1]
lustre-OST0000_UUID     262144      489     261655     1%  /mnt/lustre[OST:0]
lustre-OST0001_UUID     262144      972     261172     1%  /mnt/lustre[OST:1]
filesystem_summary:     523651      824     522827     1%  /mnt/lustre

UUID                 1K-blocks     Used  Available  Use%  Mounted on
lustre-MDT0000_UUID    1414116     5540    1282148    1%  /mnt/lustre[MDT:0]
lustre-MDT0001_UUID    1414116     4216    1283472    1%  /mnt/lustre[MDT:1]
lustre-OST0000_UUID    3833116     4244    3602776    1%  /mnt/lustre[OST:0]
lustre-OST0001_UUID    3833116     1608    3605412    1%  /mnt/lustre[OST:1]
filesystem_summary:    7666232     5852    7208188    1%  /mnt/lustre

UUID                    Inodes    IUsed      IFree  IUse%  Mounted on
lustre-MDT0000_UUID    1000.0K      436     999.6K     1%  /mnt/lustre[MDT:0]
lustre-MDT0001_UUID    1000.0K      388     999.6K     1%  /mnt/lustre[MDT:1]
lustre-OST0000_UUID     256.0K      489     255.5K     1%  /mnt/lustre[OST:0]
lustre-OST0001_UUID     256.0K      972     255.1K     1%  /mnt/lustre[OST:1]
filesystem_summary:     511.4K      824     510.6K     1%  /mnt/lustre

UUID                 1K-blocks     Used  Available  Use%  Mounted on
lustre-MDT0000_UUID    1414116     5540    1282148    1%  /mnt/lustre[MDT:0]
lustre-MDT0001_UUID    1414116     4216    1283472    1%  /mnt/lustre[MDT:1]
lustre-OST0001_UUID    3833116     1608    3605412    1%  /mnt/lustre[OST:1]
filesystem_summary:    3833116     1608    3605412    1%  /mnt/lustre
oleg146-client.virtnet: executing wait_import_state (FULL|IDLE) osc.lustre-OST0000-osc-ffff88012b776800.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff88012b776800.ost_server_uuid in FULL state after 0 sec
UUID                 1K-blocks     Used  Available  Use%  Mounted on
lustre-MDT0000_UUID    1414116     5540    1282148    1%  /mnt/lustre[MDT:0]
lustre-MDT0001_UUID    1414116     4216    1283472    1%  /mnt/lustre[MDT:1]
lustre-OST0000_UUID    3833116     4244    3602776    1%  /mnt/lustre[OST:0]
lustre-OST0001_UUID    3833116     1608    3605412    1%  /mnt/lustre[OST:1]
filesystem_summary:    7666232     5852    7208188    1%  /mnt/lustre
PASS 104a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 104b: runas -u 500 -g 500 lfs check servers test
============================================================================== 21:30:12 (1713403812) PASS 104b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 104c: Verify df vs lfs_df stays same after recordsize change ========================================================== 21:30:15 (1713403815) SKIP: sanity test_104c zfs only test SKIP 104c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 104d: runas -u 500 -g 500 lctl dl test ==== 21:30:18 (1713403818) running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lctl] [dl] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lctl] [dl] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lctl] [dl] PASS 104d (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 105a: flock when mounted without -o flock test ================================================================== 21:30:23 (1713403823) PASS 105a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 105b: fcntl when mounted without -o flock test ================================================================== 21:30:27 (1713403827) PASS 105b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 105c: lockf when mounted without -o flock test ========================================================== 21:30:31 (1713403831) PASS 105c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 105d: flock race (should not freeze) ================================================================== 21:30:35 (1713403835) striped dir -i1 -c2 -H crush2 /mnt/lustre/d105d.sanity fail_loc=0x80000315 fcntl cmd 7 failed: Input/output error fcntl cmd 5 failed: Invalid argument thread 1: set write lock (blocking): rc = 0 thread 2: unlock: rc = 0 thread 2: unlock done: rc = 0 thread 2: set write lock (non-blocking): rc = 0 thread 2: set write lock done: rc = 0 thread 1: set write lock done: rc = 0 thread 1: unlock: rc = 0 thread 1: unlock done: rc = 0 PASS 105d (12s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 105e: Two conflicting flocks from same process ========================================================== 21:30:50 (1713403850) PASS 105e (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 105f: Enqueue same range flocks =========== 21:30:55 (1713403855) Time for processing 1.000s Time for processing 1.017s Time for processing 1.013s Time for processing 1.029s Time for processing 1.028s Time for processing 1.009s Time for processing 1.012s Time for processing 1.011s Time for processing 1.013s Time for processing 1.021s Time for processing 1.022s Time for processing 1.010s Time for processing 1.018s Time for processing 1.033s Time for processing 1.015s Time for processing 1.007s Time for processing 1.019s Time for processing 1.017s Time for processing 1.017s Time for processing 1.012s Time for processing 1.019s PASS 105f (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 106: attempt exec of dir followed by chown of that dir ========================================================== 21:31:01 (1713403861) 
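The 105* flock results above can be approximated outside the harness with util-linux flock(1): on a client mounted with -o flock the second, non-blocking lock attempt fails while the first is held, whereas on a mount without flock support the lock calls themselves are refused, which is what the 105a-105c names describe. A minimal sketch, assuming a client at /mnt/lustre (file name hypothetical):

  touch /mnt/lustre/flock_probe
  flock -x /mnt/lustre/flock_probe -c 'sleep 5' &          # hold an exclusive lock
  flock -x -n /mnt/lustre/flock_probe -c true \
      || echo 'lock busy, as expected'                     # -n: fail instead of block
  wait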
striped dir -i0 -c2 -H all_char /mnt/lustre/d106.sanity /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13092: /mnt/lustre/d106.sanity: Is a directory PASS 106 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 107: Coredump on SIG ====================== 21:31:06 (1713403866) kernel.core_pattern = core kernel.core_uses_pid = 0 /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13118: 13012 Segmentation fault (core dumped) sleep 60 kernel.core_pattern = core kernel.core_uses_pid = 1 PASS 107 (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 110: filename length checking ============= 21:31:12 (1713403872) striped dir -i0 -c2 -H crush2 /mnt/lustre/d110.sanity striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d110.sanity/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa lfs mkdir: dirstripe error on '/mnt/lustre/d110.sanity/bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb': File name too long lfs setdirstripe: cannot create dir '/mnt/lustre/d110.sanity/bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb': File name too long touch: cannot touch '/mnt/lustre/d110.sanity/yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy': File name too long total 8 drwxr-xr-x 2 root root 8192 Apr 17 21:31 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa -rw-r--r-- 1 root root 0 Apr 17 21:31 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx PASS 110 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 116a: stripe QOS: free space balance ============================================================================= 21:31:18 (1713403878) Free space priority 90% Waiting for MDT destroys to complete OST kbytes available: 3602776 3605412 Min free space: OST 0: 3602776 Max free space: OST 1: 3605412 striped dir -i0 -c2 -H crush /mnt/lustre/d116a.sanity/OST0 Check for uneven OSTs: diff=2636KB (0%) must be > 17% ...no Fill 19% remaining space in OST0 with 684527KB 
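The fill step announced here pins writes to a single OST so that the two OSTs diverge in free space; the same targeting can be done directly with lfs. A rough sketch, assuming the usual /mnt/lustre mount (file name hypothetical):

  lfs df /mnt/lustre                         # per-target KB free, as in the output above
  lfs setstripe -i 0 -c 1 /mnt/lustre/fill   # single stripe, placed on OST index 0
  dd if=/dev/zero of=/mnt/lustre/fill bs=1M count=100
  lfs getstripe /mnt/lustre/fill             # confirm which OST holds the object

Once the free-space gap exceeds the qos_threshold_rr percentage (17% here), the allocator leaves round-robin for weighted QOS allocation, which is the behavior 116a goes on to measure.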
............................................................................................................................................................................................................................................................................................................................................... Waiting for MDT destroys to complete OST kbytes available: 2916672 3605412 Min free space: OST 0: 2916672 Max free space: OST 1: 3605412 diff=688740=23% must be > 17% for QOS mode...ok writing 600 files to QOS-assigned OSTs ........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................wrote 600 200k files Note: free space may not be updated, so measurements might be off Waiting for MDT destroys to complete OST kbytes available: 2864944 3539212 Min free space: OST 0: 2864944 Max free space: OST 1: 3539212 free space delta: orig 688740 final 674268 Wrote 51728KB to smaller OST 0 Wrote 66200KB to larger OST 1 Wrote 27% more data to larger OST 1 lustre-OST0000_UUID 269 files created on smaller OST 0 lustre-OST0001_UUID 331 files created on larger OST 1 Wrote 23% more files to larger OST 1 Waiting for MDT destroys to complete cleanup time 19 PASS 116a (109s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 116b: QoS shouldn't LBUG if not enough OSTs found on the 2nd pass ========================================================== 21:33:09 (1713403989) lod.lustre-MDT0000-mdtlov.qos_threshold_rr=0 lov.lustre-MDT0000-mdtlov.qos_threshold_rr=0 fail_loc=0x147 total: 20 open/close in 0.21 seconds: 96.28 ops/second fail_loc=0 lod.lustre-MDT0000-mdtlov.qos_threshold_rr=17% lov.lustre-MDT0000-mdtlov.qos_threshold_rr=17% PASS 116b (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 117: verify osd extend ==================== 21:33:16 (1713403996) 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0571027 s, 18.4 MB/s fail_loc=0x21e fail_loc=0 Truncate succeeded. 
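Setting the fail_loc error injection aside, the I/O pattern behind 117 is just an over-EOF truncate of a freshly written file, forcing the OSD to extend the object. A plain-shell equivalent (file name hypothetical):

  dd if=/dev/zero of=/mnt/lustre/f117 bs=1M count=1
  truncate -s 2M /mnt/lustre/f117       # extend past EOF; the new tail reads as zeroes
  stat -c '%s %b' /mnt/lustre/f117      # size doubles; allocated blocks need not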
PASS 117 (3s) debug_raw_pointers=0 debug_raw_pointers=0 resend_count is set to 4 4 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118a: verify O_SYNC works ================= 21:33:21 (1713404001) 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.0103325 s, 12.7 MB/s PASS 118a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118b: Reclaim dirty pages on fatal error ==================================================================== 21:33:26 (1713404006) 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.00764742 s, 17.1 MB/s fail_val=0 fail_loc=0x217 open: No such file or directory fail_val=0 fail_loc=0 Dirty pages not leaked on ENOENT PASS 118b (3s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity test_118c skipping ALWAYS excluded test 118c resend_count is set to 4 4 SKIP: sanity test_118d skipping ALWAYS excluded test 118d debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118f: Simulate unrecoverable OSC side error ==================================================================== 21:33:32 (1713404012) 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.00870967 s, 15.0 MB/s fail_loc=0x8000040a write: Input/output error fail_loc=0x0 No pages locked after fsync 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.0113131 s, 11.6 MB/s PASS 118f (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118g: Don't stay in wait if we got local -ENOMEM ==================================================================== 21:33:37 (1713404017) 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.0115072 s, 11.4 MB/s fail_loc=0x406 write: Input/output error fail_loc=0 No pages locked after fsync 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.0116709 s, 11.2 MB/s PASS 118g (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118h: Verify timeout in handling recoverables errors ==================================================================== 21:33:42 (1713404022) 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.0110182 s, 11.9 MB/s fail_val=0 fail_loc=0x20e write: Input/output error fail_val=0 fail_loc=0 No pages locked after fsync PASS 118h (14s) debug_raw_pointers=0 debug_raw_pointers=0 resend_count is set to 4 4 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118i: Fix error before timeout in recoverable error ==================================================================== 21:33:58 (1713404038) 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.0202583 s, 6.5 MB/s fail_val=0 fail_loc=0x20e fail_val=0 fail_loc=0 No pages locked after fsync PASS 118i (9s) debug_raw_pointers=0 debug_raw_pointers=0 resend_count is set to 4 4 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118j: Simulate unrecoverable OST side error ==================================================================== 21:34:10 (1713404050) 2+0 records in 2+0 records out 131072 bytes (131 kB) copied, 0.0107464 s, 12.2 MB/s fail_val=0 fail_loc=0x220 write: Bad address fail_val=0 fail_loc=0x0 No pages locked after fsync PASS 118j (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118k: bio alloc -ENOMEM and IO TERM handling =================================================================== 21:34:16 (1713404056) fail_val=0 fail_loc=0x20e striped dir -i0 -c2 -H fnv_1a_64 
/mnt/lustre/d118k.sanity /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13673: 25304 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.287761 s, 36.4 MB/s /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13672: kill: (25307) - No such process 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.276342 s, 37.9 MB/s /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13672: kill: (25310) - No such process /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13673: 25314 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13673: 25317 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13673: 25321 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13673: 25324 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13673: 25327 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13673: 25331 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) /home/green/git/lustre-release/lustre/tests/sanity.sh: line 13673: 25334 Terminated ( dd if=/dev/zero of=$DIR/$tdir/$tfile-$i bs=1M count=10 || error "dd to $DIR/$tdir/$tfile-$i failed" ) fail_val=0 fail_loc=0 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 3.37271 s, 3.1 MB/s 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 5.96509 s, 1.8 MB/s 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 1.45659 s, 7.2 MB/s 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 3.06456 s, 3.4 MB/s 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 3.30442 s, 3.2 MB/s 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 3.82007 s, 2.7 MB/s 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 6.87376 s, 1.5 MB/s 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 6.55883 s, 1.6 MB/s PASS 118k (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118l: fsync dir =========================== 21:34:28 (1713404068) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d118l.sanity PASS 118l (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118m: fdatasync dir ======================= 21:34:33 (1713404073) striped dir -i0 -c2 -H all_char /mnt/lustre/d118m.sanity PASS 118m (2s) debug_raw_pointers=0 debug_raw_pointers=0 resend_count is set to 4 4 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 118n: statfs() sends OST_STATFS requests in parallel ========================================================== 21:34:38 (1713404078) fail_val=0 fail_loc=0x242 fail_val=0 fail_loc=0 PASS 118n (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 119a: Short directIO read must return actual 
read amount ========================================================== 21:34:46 (1713404086) directio on /mnt/lustre/f119a.sanity for 1x524288 bytes PASS PASS 119a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 119b: Sparse directIO read must return actual read amount ========================================================== 21:34:50 (1713404090) 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0489606 s, 21.4 MB/s PASS 119b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 119c: Testing for direct read hitting hole ========================================================== 21:34:55 (1713404095) directio on /mnt/lustre/f119c.sanity for 1x1048576 bytes PASS directio on /mnt/lustre/f119c.sanity for 2x1048576 bytes PASS PASS 119c (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 119e: Basic tests of dio read and write at various sizes ========================================================== 21:35:00 (1713404100) 1+0 records in 1+0 records out 26214400 bytes (26 MB) copied, 0.870241 s, 30.1 MB/s 4+0 records in 4+0 records out 16380 bytes (16 kB) copied, 0.209752 s, 78.1 kB/s llite.lustre-ffff88012b776800.unaligned_dio=0 testing disabling unaligned DIO - 'invalid argument' expected: dd: error reading '/mnt/lustre/f119e.sanity.1': Invalid argument 0+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00107119 s, 0.0 kB/s llite.lustre-ffff88012b776800.unaligned_dio=1 Read/write with DIO at size 1044480 25+1 records in 25+1 records out 26214400 bytes (26 MB) copied, 1.97129 s, 13.3 MB/s -rw-r--r-- 1 root root 26214400 Apr 17 21:35 /mnt/lustre/f119e.sanity.1 -rw-r--r-- 1 root root 26214400 Apr 17 21:35 /mnt/lustre/f119e.sanity.2 /mnt/lustre/f119e.sanity.2 has type file OK /mnt/lustre/f119e.sanity.2 has size 26214400 OK Read/write with DIO at size 1048576 25+0 records in 25+0 records out 26214400 bytes (26 MB) copied, 1.8122 s, 14.5 MB/s -rw-r--r-- 1 root root 26214400 Apr 17 21:35 /mnt/lustre/f119e.sanity.1 -rw-r--r-- 1 root root 26214400 Apr 17 21:35 /mnt/lustre/f119e.sanity.2 /mnt/lustre/f119e.sanity.2 has type file OK /mnt/lustre/f119e.sanity.2 has size 26214400 OK Read/write with DIO at size 1049600 24+1 records in 24+1 records out 26214400 bytes (26 MB) copied, 1.87756 s, 14.0 MB/s -rw-r--r-- 1 root root 26214400 Apr 17 21:35 /mnt/lustre/f119e.sanity.1 -rw-r--r-- 1 root root 26214400 Apr 17 21:35 /mnt/lustre/f119e.sanity.2 /mnt/lustre/f119e.sanity.2 has type file OK /mnt/lustre/f119e.sanity.2 has size 26214400 OK llite.lustre-ffff88012b776800.unaligned_dio=1 PASS 119e (13s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 119f: dio vs dio race ===================== 21:35:15 (1713404115) 1+0 records in 1+0 records out 26214400 bytes (26 MB) copied, 0.808557 s, 32.4 MB/s bs: 1044480 12+1 records in 12+1 records out 26214400 bytes (26 MB) copied, 1.5237 s, 17.2 MB/s 25+1 records in 25+1 records out 26214400 bytes (26 MB) copied, 2.06494 s, 12.7 MB/s /mnt/lustre/f119f.sanity.2 has type file OK /mnt/lustre/f119f.sanity.2 has size 26214400 OK bs: 1048576 12+1 records in 12+1 records out 26214400 bytes (26 MB) copied, 1.39878 s, 18.7 MB/s 25+0 records in 25+0 records out 26214400 bytes (26 MB) copied, 2.02235 s, 13.0 MB/s /mnt/lustre/f119f.sanity.2 has type file OK /mnt/lustre/f119f.sanity.2 has size 26214400 OK bs: 1049600 12+1 records in 12+1 records out 26214400 bytes 
(26 MB) copied, 1.32831 s, 19.7 MB/s 24+1 records in 24+1 records out 26214400 bytes (26 MB) copied, 1.89869 s, 13.8 MB/s /mnt/lustre/f119f.sanity.2 has type file OK /mnt/lustre/f119f.sanity.2 has size 26214400 OK PASS 119f (14s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 119g: dio vs buffered I/O race ============ 21:35:31 (1713404131) 1+0 records in 1+0 records out 26214400 bytes (26 MB) copied, 0.8228 s, 31.9 MB/s bs: 1044480 12+1 records in 12+1 records out 26214400 bytes (26 MB) copied, 1.78209 s, 14.7 MB/s 25+1 records in 25+1 records out 26214400 bytes (26 MB) copied, 2.77519 s, 9.4 MB/s /mnt/lustre/f119g.sanity.2 has type file OK /mnt/lustre/f119g.sanity.2 has size 26214400 OK bs: 1048576 12+1 records in 12+1 records out 26214400 bytes (26 MB) copied, 0.777747 s, 33.7 MB/s 25+0 records in 25+0 records out 26214400 bytes (26 MB) copied, 1.94414 s, 13.5 MB/s /mnt/lustre/f119g.sanity.2 has type file OK /mnt/lustre/f119g.sanity.2 has size 26214400 OK bs: 1049600 12+1 records in 12+1 records out 26214400 bytes (26 MB) copied, 0.818849 s, 32.0 MB/s 24+1 records in 24+1 records out 26214400 bytes (26 MB) copied, 1.94136 s, 13.5 MB/s /mnt/lustre/f119g.sanity.2 has type file OK /mnt/lustre/f119g.sanity.2 has size 26214400 OK PASS 119g (13s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 119h: basic tests of memory unaligned dio ========================================================== 21:35:46 (1713404146) unaligned writes of blocksize: 1044480 unaligned writes of blocksize: 1048576 unaligned writes of blocksize: 1049600 5+0 records in 5+0 records out 26214400 bytes (26 MB) copied, 0.656818 s, 39.9 MB/s unaligned reads of blocksize: 1044480 unaligned reads of blocksize: 1048576 unaligned reads of blocksize: 1049600 PASS 119h (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 119i: test unaligned aio at varying sizes ========================================================== 21:35:59 (1713404159) /home/green/git/lustre-release/lustre/tests/aiocp 1+0 records in 1+0 records out 26214400 bytes (26 MB) copied, 0.842452 s, 31.1 MB/s bs: 1044480, align: 8, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK bs: 1048576, align: 8, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK bs: 1049600, align: 8, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK bs: 1044480, align: 512, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK bs: 1048576, align: 512, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK bs: 1049600, align: 512, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK bs: 1044480, align: 4096, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK bs: 1048576, align: 4096, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK bs: 1049600, align: 4096, file_size 26214400 /mnt/lustre/f119i.sanity.2 has type file OK /mnt/lustre/f119i.sanity.2 has size 26214400 OK PASS 119i (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == 
sanity test 120a: Early Lock Cancel: mkdir test ======= 21:36:16 (1713404176) striped dir -i0 -c1 -H fnv_1a_64 /mnt/lustre/d120a.sanity ldlm.namespaces.lustre-MDT0000-mdc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-MDT0001-mdc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-OST0000-osc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-OST0001-osc-ffff88012b776800.lru_size=400 striped dir -i0 -c1 -H crush2 /mnt/lustre/d120a.sanity/d1 ldlm.namespaces.lustre-MDT0000-mdc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-MDT0001-mdc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-OST0000-osc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-OST0001-osc-ffff88012b776800.lru_size=0 PASS 120a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 120b: Early Lock Cancel: create test ====== 21:36:21 (1713404181) striped dir -i0 -c2 -H all_char /mnt/lustre/d120b.sanity ldlm.namespaces.lustre-MDT0000-mdc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-MDT0001-mdc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-OST0000-osc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-OST0001-osc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-MDT0000-mdc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-MDT0001-mdc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-OST0000-osc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-OST0001-osc-ffff88012b776800.lru_size=0 PASS 120b (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 120c: Early Lock Cancel: link test ======== 21:36:27 (1713404187) striped dir -i0 -c1 -H fnv_1a_64 /mnt/lustre/d120c.sanity ldlm.namespaces.lustre-MDT0000-mdc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-MDT0001-mdc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-OST0000-osc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-OST0001-osc-ffff88012b776800.lru_size=400 striped dir -i0 -c1 -H fnv_1a_64 /mnt/lustre/d120c.sanity/d1 striped dir -i0 -c1 -H crush2 /mnt/lustre/d120c.sanity/d2 ldlm.namespaces.lustre-MDT0000-mdc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-MDT0001-mdc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-OST0000-osc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-OST0001-osc-ffff88012b776800.lru_size=0 PASS 120c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 120d: Early Lock Cancel: setattr test ===== 21:36:31 (1713404191) striped dir -i0 -c1 -H crush /mnt/lustre/d120d.sanity ldlm.namespaces.lustre-MDT0000-mdc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-MDT0001-mdc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-OST0000-osc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-OST0001-osc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-MDT0000-mdc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-MDT0001-mdc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-OST0000-osc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-OST0001-osc-ffff88012b776800.lru_size=0 PASS 120d (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 120e: Early Lock Cancel: unlink test ====== 21:36:36 (1713404196) striped dir -i0 -c1 -H crush2 /mnt/lustre/d120e.sanity ldlm.namespaces.lustre-MDT0000-mdc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-MDT0001-mdc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-OST0000-osc-ffff88012b776800.lru_size=400 
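These per-namespace lru_size writes recur through all of the 120* Early Lock Cancel tests; the same knobs can be driven directly with lctl, where a positive value pins the lock LRU at that many entries and 0 returns it to dynamic sizing. A minimal sketch (wildcard patterns illustrative):

  lctl get_param ldlm.namespaces.*.lru_size           # current per-namespace limits
  lctl set_param ldlm.namespaces.*mdc*.lru_size=400   # cap MDC namespaces at 400 locks
  lctl set_param ldlm.namespaces.*.lru_size=0         # back to dynamic sizing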
ldlm.namespaces.lustre-OST0001-osc-ffff88012b776800.lru_size=400 1+0 records in 1+0 records out 512 bytes (512 B) copied, 0.00789115 s, 64.9 kB/s 1+0 records in 1+0 records out 512 bytes (512 B) copied, 0.0134993 s, 37.9 kB/s ldlm.namespaces.lustre-MDT0000-mdc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-MDT0001-mdc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-OST0000-osc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-OST0001-osc-ffff88012b776800.lru_size=0 PASS 120e (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 120f: Early Lock Cancel: rename test ====== 21:36:48 (1713404208) striped dir -i0 -c1 -H all_char /mnt/lustre/d120f.sanity ldlm.namespaces.lustre-MDT0000-mdc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-MDT0001-mdc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-OST0000-osc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-OST0001-osc-ffff88012b776800.lru_size=400 striped dir -i0 -c1 -H all_char /mnt/lustre/d120f.sanity/d1 striped dir -i0 -c1 -H crush /mnt/lustre/d120f.sanity/d2 1+0 records in 1+0 records out 512 bytes (512 B) copied, 0.00771683 s, 66.3 kB/s 1+0 records in 1+0 records out 512 bytes (512 B) copied, 0.00774493 s, 66.1 kB/s 1+0 records in 1+0 records out 512 bytes (512 B) copied, 0.0122643 s, 41.7 kB/s 1+0 records in 1+0 records out 512 bytes (512 B) copied, 0.0122101 s, 41.9 kB/s ldlm.namespaces.lustre-MDT0000-mdc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-MDT0001-mdc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-OST0000-osc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-OST0001-osc-ffff88012b776800.lru_size=0 PASS 120f (11s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 120g: Early Lock Cancel: performance test ========================================================== 21:37:01 (1713404221) ldlm.namespaces.lustre-MDT0000-mdc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-MDT0001-mdc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-OST0000-osc-ffff88012b776800.lru_size=400 ldlm.namespaces.lustre-OST0001-osc-ffff88012b776800.lru_size=400 create 10000 files striped dir -i0 -c2 -H crush2 /mnt/lustre/d120g.sanity - open/close 2412 (time 1713404233.21 total 10.00 last 241.14) - open/close 4886 (time 1713404243.21 total 20.01 last 247.33) - open/close 7373 (time 1713404253.21 total 30.01 last 248.69) total: 10000 open/close in 39.79 seconds: 251.32 ops/second total: 1 cancels, 0 blockings rm 10000 files total: 10000 removes in 64 total: 2 cancels, 0 blockings ldlm.namespaces.lustre-MDT0000-mdc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-MDT0001-mdc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-OST0000-osc-ffff88012b776800.lru_size=0 ldlm.namespaces.lustre-OST0001-osc-ffff88012b776800.lru_size=0 PASS 120g (110s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 121: read cancel race ===================== 21:38:53 (1713404333) fail_loc=0x310 fail_loc=0 PASS 121 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 123aa: verify statahead work ============== 21:38:56 (1713404336) seq.cli-lustre-OST0000-super.width=0x1ffffff seq.cli-lustre-OST0001-super.width=0x1ffffff kvm mdc.lustre-MDT0000-mdc-ffff88012b776800.batch_stats=0 mdc.lustre-MDT0001-mdc-ffff88012b776800.batch_stats=0 striped dir -i1 -c2 -H crush /mnt/lustre/d123aa.sanity total: 100 open/close in 0.87 seconds: 114.71 ops/second 
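The timing comparison that follows toggles a single llite tunable on and off around an ls -l of the populated directory; the same experiment works on any large directory (path hypothetical):

  lctl set_param llite.*.statahead_max=0       # disable statahead
  time ls -l /mnt/lustre/bigdir >/dev/null
  lctl set_param llite.*.statahead_max=128     # re-enable with a cap of 128 requests
  time ls -l /mnt/lustre/bigdir >/dev/null
  lctl get_param llite.*.statahead_stats       # hit/miss counters, as printed below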
llite.lustre-ffff88012b776800.statahead_max=0
101
real    0m0.758s
user    0m0.008s
sys     0m0.257s
ls -l 100 files without statahead: 1 sec
llite.lustre-ffff88012b776800.statahead_max=128
128
101
real    0m0.303s
user    0m0.003s
sys     0m0.166s
ls -l 100 files with statahead: 0 sec
statahead total: 27
statahead wrong: 0
agl total: 27
list_total: 27
fname_total: 0
hit_total: 348
miss_total: 139
total: 900 open/close in 2.93 seconds: 307.40 ops/second
llite.lustre-ffff88012b776800.statahead_max=0
1001
real    0m8.500s
user    0m0.016s
sys     0m3.057s
ls -l 1000 files without statahead: 9 sec
llite.lustre-ffff88012b776800.statahead_max=128
128
1001
real    0m0.991s
user    0m0.014s
sys     0m0.811s
ls -l 1000 files with statahead: 1 sec
statahead total: 28
statahead wrong: 0
agl total: 28
list_total: 28
fname_total: 0
hit_total: 1347
miss_total: 140
ls -l done
rm -r /mnt/lustre/d123aa.sanity/: 5 seconds
rm done
statahead total: 28
statahead wrong: 0
agl total: 28
list_total: 28
fname_total: 0
hit_total: 1347
miss_total: 140
mdc.lustre-MDT0000-mdc-ffff88012b776800.batch_stats=
snapshot_time:    1713404363.367049680 secs.nsecs
start_time:       1713404337.940876325 secs.nsecs
elapsed_time:     25.426173355 secs.nsecs
subreqs per batch  batches    %  cum %
  1:                     3   15     15
  2:                     0    0     15
  4:                     3   15     30
  8:                     0    0     30
 16:                     1    5     35
 32:                    10   50     85
 64:                     3   15    100
mdc.lustre-MDT0001-mdc-ffff88012b776800.batch_stats=
snapshot_time:    1713404363.367231307 secs.nsecs
start_time:       1713404337.941039962 secs.nsecs
elapsed_time:     25.426191345 secs.nsecs
subreqs per batch  batches    %  cum %
  1:                     3   12     12
  2:                     1    4     16
  4:                     1    4     20
  8:                     4   16     36
 16:                     2    8     44
 32:                     0    0     44
 64:                    14   56    100
seq.cli-lustre-OST0000-super.width=65536
seq.cli-lustre-OST0001-super.width=65536
PASS 123aa (29s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 123ab: verify statahead work by using statx ========================================================== 21:39:27 (1713404367)
SKIP: sanity test_123ab Test must be statx() syscall supported
SKIP 123ab (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 123ac: verify statahead work by using statx without glimpse RPCs ========================================================== 21:39:31 (1713404371)
SKIP: sanity test_123ac Test must be statx() syscall supported
SKIP 123ac (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 123ad: Verify batching statahead works correctly ========================================================== 21:39:33 (1713404373)
batching: statahead_max=32 statahead_batch_max=32
mdc.lustre-MDT0000-mdc-ffff88012b776800.batch_stats=0
mdc.lustre-MDT0001-mdc-ffff88012b776800.batch_stats=0
llite.lustre-ffff88012b776800.statahead_max=32
llite.lustre-ffff88012b776800.statahead_batch_max=32
seq.cli-lustre-OST0000-super.width=0x1ffffff
seq.cli-lustre-OST0001-super.width=0x1ffffff
kvm
mdc.lustre-MDT0000-mdc-ffff88012b776800.batch_stats=0
mdc.lustre-MDT0001-mdc-ffff88012b776800.batch_stats=0
striped dir -i1 -c2 -H crush2 /mnt/lustre/d123ad.sanity
total: 100 open/close in 0.92 seconds: 109.14 ops/second
llite.lustre-ffff88012b776800.statahead_max=0
101
real    0m0.705s
user    0m0.003s
sys     0m0.257s
ls -l 100 files without statahead: 1 sec
llite.lustre-ffff88012b776800.statahead_max=32
32
101
real    0m0.293s
user    0m0.004s
sys     0m0.147s
ls -l 100 files with statahead: 1 sec
statahead total: 29
statahead wrong: 0
agl total: 29
list_total: 29
fname_total: 0
hit_total: 1446
miss_total: 141
total: 900 open/close in 3.62 seconds: 248.34 ops/second
llite.lustre-ffff88012b776800.statahead_max=0
1001
real    0m8.581s
user    0m0.040s
sys     0m3.042s
ls -l 1000 files without statahead: 8 sec
llite.lustre-ffff88012b776800.statahead_max=32
32
1001
real    0m2.450s
user    0m0.029s
sys     0m1.273s
ls -l 1000 files with statahead: 2 sec
statahead total: 30
statahead wrong: 0
agl total: 30
list_total: 30
fname_total: 0
hit_total: 2445
miss_total: 142
ls -l done
rm -r /mnt/lustre/d123ad.sanity/: 5 seconds
rm done
statahead total: 30
statahead wrong: 0
agl total: 30
list_total: 30
fname_total: 0
hit_total: 2445
miss_total: 142
mdc.lustre-MDT0000-mdc-ffff88012b776800.batch_stats=
snapshot_time:    1713404402.991898734 secs.nsecs
start_time:       1713404375.501295318 secs.nsecs
elapsed_time:     27.490603416 secs.nsecs
subreqs per batch  batches    %  cum %
  1:                     3    7      7
  2:                     2    5     13
  4:                     0    0     13
  8:                     3    7     21
 16:                    24   63     84
 32:                     6   15    100
mdc.lustre-MDT0001-mdc-ffff88012b776800.batch_stats=
snapshot_time:    1713404402.992120045 secs.nsecs
start_time:       1713404375.501438861 secs.nsecs
elapsed_time:     27.490681184 secs.nsecs
subreqs per batch  batches    %  cum %
  1:                     2    5      5
  2:                     1    2      7
  4:                     1    2     10
  8:                     2    5     15
 16:                    11   27     42
 32:                    23   57    100
- open/close 2481 (time 1713404413.92 total 10.00 last 248.07)
- open/close 4930 (time 1713404423.92 total 20.00 last 244.89)
- open/close 7799 (time 1713404433.92 total 30.00 last 286.84)
total: 10000 open/close in 38.70 seconds: 258.42 ops/second
llite.lustre-ffff88012b776800.statahead_batch_max=0
llite.lustre-ffff88012b776800.statahead_stats=clear
mdc.lustre-MDT0000-mdc-ffff88012b776800.stats=clear
mdc.lustre-MDT0001-mdc-ffff88012b776800.stats=clear
10001
real    0m8.782s
user    0m0.189s
sys     0m8.097s
llite.lustre-ffff88012b776800.statahead_batch_max=32
llite.lustre-ffff88012b776800.statahead_stats=clear
mdc.lustre-MDT0000-mdc-ffff88012b776800.batch_stats=clear
mdc.lustre-MDT0001-mdc-ffff88012b776800.batch_stats=clear
mdc.lustre-MDT0000-mdc-ffff88012b776800.stats=clear
mdc.lustre-MDT0001-mdc-ffff88012b776800.stats=clear
10001
real    0m29.627s
user    0m0.286s
sys     0m13.600s
unbatched RPCs: 10004, batched RPCs: 315
mdc.lustre-MDT0000-mdc-ffff88012b776800.batch_stats=
snapshot_time:    1713404485.492265656 secs.nsecs
start_time:       1713404453.499694779 secs.nsecs
elapsed_time:     31.992570877 secs.nsecs
subreqs per batch  batches    %  cum %
  1:                     0    0      0
  2:                     0    0      0
  4:                     0    0      0
  8:                     1    0      0
 16:                     2    0      0
 32:                   312   99    100
mdc.lustre-MDT0001-mdc-ffff88012b776800.batch_stats=
snapshot_time:    1713404485.492442883 secs.nsecs
start_time:       1713404453.499778382 secs.nsecs
elapsed_time:     31.992664501 secs.nsecs
subreqs per batch  batches    %  cum %
  1:                     0    0      0
batching: statahead_max=2048 statahead_batch_max=256
mdc.lustre-MDT0000-mdc-ffff88012b776800.batch_stats=0
mdc.lustre-MDT0001-mdc-ffff88012b776800.batch_stats=0
llite.lustre-ffff88012b776800.statahead_max=2048
llite.lustre-ffff88012b776800.statahead_batch_max=256
seq.cli-lustre-OST0000-super.width=0x1ffffff
seq.cli-lustre-OST0001-super.width=0x1ffffff
kvm
mdc.lustre-MDT0000-mdc-ffff88012b776800.batch_stats=0
mdc.lustre-MDT0001-mdc-ffff88012b776800.batch_stats=0
striped dir -i1 -c2 -H crush /mnt/lustre/d123ad.sanity
total: 100 open/close in 0.94 seconds: 106.61 ops/second
llite.lustre-ffff88012b776800.statahead_max=0
101
real    0m0.912s
user    0m0.006s
sys     0m0.319s
ls -l 100 files without statahead: 1 sec
llite.lustre-ffff88012b776800.statahead_max=2048
2048
101
real    0m0.334s
user    0m0.002s
sys     0m0.148s
ls -l 100 files with statahead: 1 sec
statahead total: 2
statahead wrong: 0
agl total: 2
list_total: 2
fname_total: 0
hit_total: 10098
miss_total: 2
total: 900 open/close in 3.54 seconds: 254.10 ops/second
llite.lustre-ffff88012b776800.statahead_max=0
1001
real    0m9.001s
user    0m0.044s
sys     0m3.226s
ls -l 1000 files without statahead: 9 sec
llite.lustre-ffff88012b776800.statahead_max=2048
2048
1001
real    0m1.251s
user    0m0.020s
sys     0m0.843s
ls -l 1000 files with statahead: 1 sec
statahead total: 3
statahead wrong: 0
agl total: 3
list_total: 3
fname_total: 0
hit_total: 11097
miss_total: 3
ls -l done
rm -r /mnt/lustre/d123ad.sanity/: 5 seconds
rm done
statahead total: 3
statahead wrong: 0
agl total: 3
list_total: 3
fname_total: 0
hit_total: 11097
miss_total: 3
mdc.lustre-MDT0000-mdc-ffff88012b776800.batch_stats=
snapshot_time:    1713404561.492608248 secs.nsecs
start_time:       1713404486.492283145 secs.nsecs
elapsed_time:     75.000325103 secs.nsecs
subreqs per batch  batches    %  cum %
  1:                     3   37     37
  2:                     1   12     50
  4:                     0    0     50
  8:                     2   25     75
 16:                     0    0     75
 32:                     0    0     75
 64:                     0    0     75
128:                     0    0     75
256:                     2   25    100
mdc.lustre-MDT0001-mdc-ffff88012b776800.batch_stats=
snapshot_time:    1713404561.492716708 secs.nsecs
start_time:       1713404486.492460591 secs.nsecs
elapsed_time:     75.000256117 secs.nsecs
subreqs per batch  batches    %  cum %
  1:                     0    0      0
  2:                     1   11     11
  4:                     1   11     22
  8:                     2   22     44
 16:                     1   11     55
 32:                     0    0     55
 64:                     2   22     77
128:                     0    0     77
256:                     2   22    100
- open/close 2895 (time 1713404572.34 total 10.00 last 289.37)
- open/close 5711 (time 1713404582.35 total 20.01 last 281.52)
- open/close 8710 (time 1713404592.35 total 30.01 last 299.83)
total: 10000 open/close in 35.22 seconds: 283.97 ops/second
llite.lustre-ffff88012b776800.statahead_batch_max=0
llite.lustre-ffff88012b776800.statahead_stats=clear
mdc.lustre-MDT0000-mdc-ffff88012b776800.stats=clear
mdc.lustre-MDT0001-mdc-ffff88012b776800.stats=clear
10001
real    0m9.539s
user    0m0.194s
sys     0m8.547s
llite.lustre-ffff88012b776800.statahead_batch_max=256
llite.lustre-ffff88012b776800.statahead_stats=clear
mdc.lustre-MDT0000-mdc-ffff88012b776800.batch_stats=clear
mdc.lustre-MDT0001-mdc-ffff88012b776800.batch_stats=clear
mdc.lustre-MDT0000-mdc-ffff88012b776800.stats=clear
mdc.lustre-MDT0001-mdc-ffff88012b776800.stats=clear
10001
real    0m8.889s
user    0m0.181s
sys     0m7.604s
unbatched RPCs: 10004, batched RPCs: 278
mdc.lustre-MDT0000-mdc-ffff88012b776800.batch_stats=
snapshot_time:    1713404620.465824700 secs.nsecs
start_time:       1713404609.154754457 secs.nsecs
elapsed_time:     11.311070243 secs.nsecs
subreqs per batch  batches    %  cum %
  1:                   178   64     64
  2:                    15    5     69
  4:                     7    2     71
  8:                     8    2     74
 16:                    16    5     80
 32:                    15    5     85
 64:                     3    1     87
128:                     1    0     87
256:                    35   12    100
mdc.lustre-MDT0001-mdc-ffff88012b776800.batch_stats=
snapshot_time:    1713404620.466017540 secs.nsecs
start_time:       1713404609.154845357 secs.nsecs
elapsed_time:     11.311172183 secs.nsecs
subreqs per batch  batches    %  cum %
  1:                     0    0      0
seq.cli-lustre-OST0000-super.width=33554431
seq.cli-lustre-OST0001-super.width=33554431
seq.cli-lustre-OST0000-super.width=65536
seq.cli-lustre-OST0001-super.width=65536
llite.lustre-ffff88012b776800.statahead_batch_max=64
llite.lustre-ffff88012b776800.statahead_max=128
PASS 123ad (299s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 123b: not panic with network error in statahead enqueue (bug 15027) ========================================================== 21:44:34 (1713404674)
striped dir -i1 -c2 -H all_char /mnt/lustre/d123b.sanity
total: 1000 open/close in 2.85 seconds: 350.68 ops/second
fail_loc=0x80000803
ls done
fail_loc=0x0
statahead total: 2
statahead wrong: 0
agl total: 2
list_total: 2
fname_total: 0
hit_total: 10998
miss_total: 2
PASS 123b (10s)
debug_raw_pointers=0
debug_raw_pointers=0
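Every statahead measurement above follows the same A/B pattern: set llite.*.statahead_max=0, time a cold ls -l of the test directory, restore statahead_max, time the same listing again, then read llite.*.statahead_stats to confirm that hit_total grew. A minimal standalone sketch of that pattern follows, assuming only a Lustre client mounted at /mnt/lustre; the demo directory, file count, and the use of drop_caches for cache invalidation are illustrative stand-ins, not code from sanity.sh:

#!/bin/bash
# Hypothetical statahead A/B timing demo -- a sketch, not sanity.sh code.
# Assumes a Lustre client mounted at /mnt/lustre and lctl in PATH.
DIR=/mnt/lustre/statahead-demo
NFILES=1000

mkdir -p $DIR
for ((i = 0; i < NFILES; i++)); do touch $DIR/f.$i; done

# Baseline: statahead disabled, cold client cache.
lctl set_param llite.*.statahead_max=0
echo 3 > /proc/sys/vm/drop_caches
time ls -l $DIR > /dev/null

# Same listing with statahead re-enabled (128 is the default seen above).
lctl set_param llite.*.statahead_max=128
lctl set_param llite.*.statahead_stats=clear
echo 3 > /proc/sys/vm/drop_caches
time ls -l $DIR > /dev/null

# hit_total should dwarf miss_total, as in the PASS output above.
lctl get_param llite.*.statahead_stats

The 123ad runs layer llite.*.statahead_batch_max on top of this same pattern; the mdc batch_stats histograms then show where the getattr subrequests went, e.g. with statahead_batch_max=32 the traffic collapsed from 10004 unbatched RPCs to 315 batched ones, 312 of them full 32-subrequest batches on MDT0000.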
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 123c: Can not initialize inode warning on DNE statahead ========================================================== 21:44:45 (1713404685)
striped dir -i0 -c1 -H crush /mnt/lustre/d123c.sanity.0
striped dir -i1 -c1 -H fnv_1a_64 /mnt/lustre/d123c.sanity.1
Stopping client oleg146-client.virtnet /mnt/lustre (opts:)
Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre
PASS 123c (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 123d: Statahead on striped directories works correctly ========================================================== 21:44:51 (1713404691)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d123d.sanity
total: 100 mkdir in 0.85 seconds: 117.53 ops/second
Stopping client oleg146-client.virtnet /mnt/lustre (opts:)
Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre
llite.lustre-ffff8800a9693000.statahead_max=128
llite.lustre-ffff8800a9693000.statahead_stats=0
total 800
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity0
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity1
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity10
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity11
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity12
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity13
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity14
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity15
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity16
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity17
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity18
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity19
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity2
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity20
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity21
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity22
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity23
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity24
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity25
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity26
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity27
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity28
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity29
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity3
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity30
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity31
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity32
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity33
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity34
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity35
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity36
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity37
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity38
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity39
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity4
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity40
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity41
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity42
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity43
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity44
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity45
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity46
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity47
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity48
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity49
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity5
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity50
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity51
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity52
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity53
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity54
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity55
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity56
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity57
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity58
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity59
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity6
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity60
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity61
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity62
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity63
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity64
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity65
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity66
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity67
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity68
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity69
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity7
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity70
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity71
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity72
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity73
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity74
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity75
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity76
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity77
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity78
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity79
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity8
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity80
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity81
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity82
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity83
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity84
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity85
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity86
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity87
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity88
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity89
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity9
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity90
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity91
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity92
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity93
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity94
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity95
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity96
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity97
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity98
drwxr-xr-x 2 root root 8192 Apr 17 21:44 f123d.sanity99
statahead total: 1
statahead wrong: 0
agl total: 1
list_total: 1
fname_total: 0
hit_total: 99
miss_total: 1
PASS 123d (5s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 123e: statahead with large wide striping == 21:44:58 (1713404698)
llite.lustre-ffff8800a9693000.statahead_max=2048
llite.lustre-ffff8800a9693000.statahead_batch_max=1024
total 0
-rw-r--r-- 1 root root
0 Apr 17 21:44 f123e.sanity.0 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.1 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.10 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.100 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.1000 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.101 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.102 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.103 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.104 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.105 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.106 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.107 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.108 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.109 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.11 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.110 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.111 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.112 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.113 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.114 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.115 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.116 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.117 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.118 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.119 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.12 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.120 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.121 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.122 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.123 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.124 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.125 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.126 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.127 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.128 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.129 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.13 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.130 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.131 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.132 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.133 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.134 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.135 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.136 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.137 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.138 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.139 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.14 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.140 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.141 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.142 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.143 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.144 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.145 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.146 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.147 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.148 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.149 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.15 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.150 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.151 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.152 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.153 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.154 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.155 -rw-r--r-- 
1 root root 0 Apr 17 21:45 f123e.sanity.156 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.157 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.158 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.159 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.16 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.160 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.161 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.162 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.163 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.164 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.165 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.166 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.167 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.168 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.169 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.17 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.170 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.171 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.172 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.173 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.174 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.175 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.176 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.177 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.178 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.179 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.18 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.180 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.181 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.182 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.183 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.184 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.185 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.186 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.187 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.188 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.189 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.19 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.190 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.191 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.192 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.193 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.194 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.195 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.196 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.197 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.198 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.199 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.2 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.20 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.200 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.201 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.202 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.203 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.204 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.205 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.206 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.207 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.208 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.209 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.21 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.210 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.211 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.212 -rw-r--r-- 1 root root 0 Apr 17 21:45 
f123e.sanity.213 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.214 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.215 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.216 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.217 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.218 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.219 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.22 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.220 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.221 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.222 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.223 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.224 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.225 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.226 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.227 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.228 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.229 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.23 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.230 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.231 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.232 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.233 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.234 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.235 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.236 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.237 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.238 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.239 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.24 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.240 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.241 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.242 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.243 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.244 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.245 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.246 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.247 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.248 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.249 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.25 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.250 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.251 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.252 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.253 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.254 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.255 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.256 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.257 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.258 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.259 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.26 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.260 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.261 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.262 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.263 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.264 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.265 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.266 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.267 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.268 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.269 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.27 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.270 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.271 -rw-r--r-- 1 root root 
0 Apr 17 21:45 f123e.sanity.272 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.273 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.274 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.275 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.276 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.277 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.278 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.279 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.28 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.280 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.281 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.282 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.283 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.284 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.285 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.286 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.287 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.288 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.289 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.29 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.290 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.291 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.292 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.293 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.294 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.295 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.296 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.297 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.298 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.299 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.3 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.30 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.300 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.301 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.302 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.303 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.304 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.305 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.306 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.307 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.308 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.309 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.31 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.310 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.311 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.312 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.313 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.314 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.315 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.316 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.317 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.318 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.319 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.32 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.320 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.321 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.322 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.323 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.324 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.325 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.326 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.327 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.328 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.329 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.33 -rw-r--r-- 
1 root root 0 Apr 17 21:45 f123e.sanity.330 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.331 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.332 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.333 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.334 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.335 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.336 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.337 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.338 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.339 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.34 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.340 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.341 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.342 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.343 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.344 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.345 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.346 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.347 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.348 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.349 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.35 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.350 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.351 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.352 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.353 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.354 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.355 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.356 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.357 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.358 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.359 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.36 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.360 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.361 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.362 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.363 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.364 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.365 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.366 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.367 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.368 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.369 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.37 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.370 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.371 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.372 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.373 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.374 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.375 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.376 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.377 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.378 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.379 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.38 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.380 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.381 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.382 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.383 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.384 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.385 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.386 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.387 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.388 -rw-r--r-- 1 root root 0 Apr 17 21:45 
f123e.sanity.389 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.39 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.390 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.391 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.392 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.393 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.394 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.395 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.396 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.397 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.398 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.399 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.4 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.40 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.400 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.401 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.402 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.403 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.404 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.405 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.406 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.407 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.408 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.409 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.41 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.410 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.411 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.412 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.413 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.414 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.415 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.416 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.417 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.418 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.419 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.42 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.420 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.421 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.422 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.423 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.424 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.425 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.426 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.427 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.428 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.429 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.43 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.430 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.431 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.432 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.433 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.434 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.435 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.436 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.437 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.438 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.439 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.44 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.440 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.441 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.442 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.443 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.444 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.445 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.446 -rw-r--r-- 1 root root 0 
Apr 17 21:45 f123e.sanity.447 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.448 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.449 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.45 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.450 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.451 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.452 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.453 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.454 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.455 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.456 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.457 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.458 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.459 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.46 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.460 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.461 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.462 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.463 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.464 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.465 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.466 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.467 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.468 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.469 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.47 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.470 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.471 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.472 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.473 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.474 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.475 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.476 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.477 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.478 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.479 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.48 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.480 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.481 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.482 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.483 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.484 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.485 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.486 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.487 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.488 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.489 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.49 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.490 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.491 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.492 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.493 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.494 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.495 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.496 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.497 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.498 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.499 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.5 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.50 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.500 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.501 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.502 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.503 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.504 -rw-r--r-- 1 
root root 0 Apr 17 21:45 f123e.sanity.505 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.506 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.507 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.508 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.509 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.51 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.510 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.511 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.512 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.513 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.514 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.515 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.516 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.517 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.518 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.519 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.52 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.520 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.521 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.522 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.523 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.524 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.525 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.526 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.527 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.528 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.529 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.53 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.530 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.531 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.532 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.533 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.534 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.535 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.536 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.537 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.538 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.539 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.54 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.540 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.541 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.542 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.543 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.544 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.545 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.546 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.547 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.548 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.549 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.55 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.550 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.551 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.552 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.553 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.554 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.555 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.556 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.557 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.558 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.559 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.56 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.560 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.561 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.562 -rw-r--r-- 1 root root 0 Apr 17 21:45 
f123e.sanity.563 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.564 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.565 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.566 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.567 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.568 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.569 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.57 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.570 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.571 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.572 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.573 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.574 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.575 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.576 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.577 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.578 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.579 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.58 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.580 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.581 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.582 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.583 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.584 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.585 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.586 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.587 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.588 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.589 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.59 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.590 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.591 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.592 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.593 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.594 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.595 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.596 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.597 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.598 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.599 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.6 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.60 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.600 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.601 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.602 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.603 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.604 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.605 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.606 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.607 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.608 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.609 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.61 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.610 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.611 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.612 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.613 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.614 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.615 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.616 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.617 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.618 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.619 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.62 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.620 -rw-r--r-- 1 root root 0 
Apr 17 21:45 f123e.sanity.621 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.622 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.623 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.624 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.625 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.626 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.627 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.628 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.629 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.63 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.630 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.631 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.632 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.633 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.634 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.635 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.636 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.637 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.638 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.639 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.64 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.640 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.641 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.642 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.643 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.644 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.645 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.646 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.647 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.648 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.649 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.65 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.650 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.651 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.652 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.653 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.654 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.655 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.656 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.657 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.658 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.659 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.66 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.660 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.661 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.662 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.663 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.664 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.665 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.666 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.667 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.668 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.669 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.67 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.670 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.671 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.672 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.673 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.674 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.675 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.676 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.677 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.678 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.679 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.68 -rw-r--r-- 
1 root root 0 Apr 17 21:45 f123e.sanity.680 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.681 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.682 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.683 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.684 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.685 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.686 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.687 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.688 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.689 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.69 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.690 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.691 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.692 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.693 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.694 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.695 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.696 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.697 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.698 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.699 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.7 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.70 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.700 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.701 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.702 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.703 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.704 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.705 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.706 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.707 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.708 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.709 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.71 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.710 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.711 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.712 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.713 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.714 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.715 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.716 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.717 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.718 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.719 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.72 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.720 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.721 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.722 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.723 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.724 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.725 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.726 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.727 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.728 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.729 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.73 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.730 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.731 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.732 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.733 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.734 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.735 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.736 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.737 -rw-r--r-- 1 root root 0 Apr 17 21:45 
f123e.sanity.738 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.739 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.74 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.740 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.741 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.742 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.743 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.744 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.745 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.746 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.747 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.748 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.749 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.75 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.750 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.751 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.752 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.753 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.754 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.755 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.756 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.757 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.758 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.759 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.76 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.760 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.761 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.762 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.763 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.764 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.765 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.766 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.767 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.768 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.769 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.77 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.770 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.771 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.772 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.773 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.774 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.775 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.776 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.777 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.778 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.779 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.78 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.780 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.781 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.782 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.783 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.784 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.785 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.786 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.787 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.788 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.789 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.79 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.790 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.791 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.792 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.793 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.794 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.795 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.796 -rw-r--r-- 1 root root 
0 Apr 17 21:45 f123e.sanity.797 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.798 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.799 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.8 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.80 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.800 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.801 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.802 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.803 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.804 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.805 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.806 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.807 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.808 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.809 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.81 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.810 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.811 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.812 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.813 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.814 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.815 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.816 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.817 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.818 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.819 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.82 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.820 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.821 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.822 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.823 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.824 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.825 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.826 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.827 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.828 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.829 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.83 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.830 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.831 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.832 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.833 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.834 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.835 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.836 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.837 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.838 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.839 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.84 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.840 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.841 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.842 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.843 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.844 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.845 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.846 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.847 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.848 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.849 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.85 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.850 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.851 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.852 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.853 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.854 -rw-r--r-- 
1 root root 0 Apr 17 21:45 f123e.sanity.855 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.856 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.857 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.858 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.859 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.86 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.860 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.861 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.862 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.863 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.864 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.865 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.866 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.867 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.868 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.869 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.87 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.870 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.871 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.872 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.873 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.874 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.875 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.876 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.877 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.878 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.879 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.88 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.880 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.881 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.882 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.883 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.884 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.885 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.886 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.887 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.888 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.889 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.89 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.890 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.891 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.892 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.893 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.894 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.895 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.896 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.897 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.898 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.899 -rw-r--r-- 1 root root 0 Apr 17 21:44 f123e.sanity.9 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.90 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.900 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.901 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.902 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.903 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.904 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.905 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.906 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.907 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.908 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.909 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.91 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.910 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.911 -rw-r--r-- 1 root root 0 Apr 17 21:45 
f123e.sanity.912 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.913 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.914 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.915 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.916 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.917 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.918 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.919 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.92 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.920 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.921 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.922 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.923 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.924 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.925 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.926 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.927 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.928 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.929 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.93 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.930 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.931 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.932 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.933 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.934 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.935 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.936 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.937 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.938 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.939 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.94 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.940 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.941 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.942 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.943 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.944 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.945 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.946 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.947 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.948 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.949 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.95 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.950 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.951 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.952 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.953 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.954 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.955 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.956 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.957 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.958 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.959 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.96 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.960 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.961 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.962 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.963 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.964 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.965 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.966 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.967 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.968 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.969 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.97 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.970 -rw-r--r-- 1 root root 
0 Apr 17 21:45 f123e.sanity.971 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.972 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.973 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.974 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.975 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.976 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.977 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.978 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.979 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.98 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.980 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.981 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.982 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.983 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.984 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.985 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.986 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.987 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.988 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.989 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.99 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.990 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.991 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.992 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.993 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.994 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.995 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.996 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.997 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.998 -rw-r--r-- 1 root root 0 Apr 17 21:45 f123e.sanity.999 mdc.lustre-MDT0000-mdc-ffff8800a9693000.batch_stats= snapshot_time: 1713404752.948572464 secs.nsecs start_time: 0.000000000 secs.nsecs elapsed_time: 1713404752.948572464 secs.nsecs subreqs per batch batches % cum % 1: 0 0 0 2: 0 0 0 4: 1 10 10 8: 2 20 30 16: 2 20 50 32: 2 20 70 64: 1 10 80 128: 1 10 90 256: 0 0 90 512: 0 0 90 1024: 1 10 100 mdc.lustre-MDT0001-mdc-ffff8800a9693000.batch_stats= snapshot_time: 1713404752.948774890 secs.nsecs start_time: 0.000000000 secs.nsecs elapsed_time: 1713404752.948774890 secs.nsecs subreqs per batch batches % cum % 1: 0 0 0 2: 0 0 0 4: 1 25 25 8: 2 50 75 16: 0 0 75 32: 0 0 75 64: 1 25 100 llite.lustre-ffff8800a9693000.statahead_agl=1 llite.lustre-ffff8800a9693000.statahead_batch_max=1024 llite.lustre-ffff8800a9693000.statahead_max=2048 llite.lustre-ffff8800a9693000.statahead_min=8 llite.lustre-ffff8800a9693000.statahead_running_max=16 llite.lustre-ffff8800a9693000.statahead_timeout=30 llite.lustre-ffff8800a9693000.statahead_stats= statahead total: 2 statahead wrong: 0 agl total: 2 list_total: 2 fname_total: 0 hit_total: 231 miss_total: 1738 llite.lustre-ffff8800a9693000.statahead_batch_max=64 llite.lustre-ffff8800a9693000.statahead_max=128 PASS 123e (73s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 123f: Retry mechanism with large wide striping files ========================================================== 21:46:14 (1713404774) llite.lustre-ffff8800a9693000.statahead_max=64 llite.lustre-ffff8800a9693000.statahead_batch_max=64 total 0 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.0 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.1 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.10 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.100 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.101 -rw-r--r-- 1 root root 0 Apr 17 21:46 
f123f.sanity.102 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.103 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.104 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.105 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.106 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.107 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.108 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.109 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.11 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.110 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.111 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.112 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.113 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.114 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.115 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.116 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.117 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.118 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.119 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.12 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.120 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.121 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.122 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.123 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.124 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.125 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.126 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.127 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.128 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.129 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.13 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.130 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.131 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.132 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.133 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.134 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.135 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.136 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.137 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.138 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.139 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.14 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.140 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.141 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.142 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.143 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.144 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.145 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.146 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.147 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.148 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.149 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.15 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.150 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.151 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.152 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.153 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.154 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.155 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.156 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.157 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.158 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.159 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.16 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.160 -rw-r--r-- 1 root root 
0 Apr 17 21:47 f123f.sanity.161 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.162 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.163 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.164 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.165 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.166 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.167 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.168 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.169 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.17 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.170 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.171 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.172 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.173 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.174 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.175 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.176 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.177 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.178 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.179 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.18 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.180 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.181 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.182 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.183 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.184 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.185 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.186 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.187 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.188 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.189 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.19 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.190 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.191 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.192 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.193 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.194 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.195 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.196 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.197 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.198 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.199 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.2 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.20 -rw-r--r-- 1 root root 0 Apr 17 21:47 f123f.sanity.200 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.21 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.22 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.23 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.24 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.25 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.26 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.27 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.28 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.29 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.3 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.30 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.31 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.32 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.33 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.34 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.35 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.36 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.37 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.38 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.39 -rw-r--r-- 1 root root 0 Apr 
17 21:46 f123f.sanity.4 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.40 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.41 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.42 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.43 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.44 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.45 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.46 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.47 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.48 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.49 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.5 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.50 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.51 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.52 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.53 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.54 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.55 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.56 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.57 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.58 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.59 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.6 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.60 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.61 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.62 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.63 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.64 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.65 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.66 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.67 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.68 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.69 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.7 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.70 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.71 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.72 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.73 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.74 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.75 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.76 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.77 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.78 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.79 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.8 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.80 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.81 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.82 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.83 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.84 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.85 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.86 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.87 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.88 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.89 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.9 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.90 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.91 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.92 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.93 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.94 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.95 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.96 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.97 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.98 -rw-r--r-- 1 root root 0 Apr 17 21:46 f123f.sanity.99 
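Test 123f drops the client's statahead window (statahead_max=64, statahead_batch_max=64) and then lists a directory of wide-striped files; the batch_stats histograms that follow show how many getattr subrequests the MDC coalesced into each batched RPC, and statahead_stats reports the hit/miss counters. A minimal sketch of the same tune-and-measure cycle, assuming a Lustre client mounted at /mnt/lustre (the directory name and file count below are illustrative, not the test's exact values):

#!/bin/bash
# Sketch: observe statahead batching while scanning a wide-striped directory.
MNT=/mnt/lustre
DIR=$MNT/wide_stripe_demo            # hypothetical demo directory
mkdir -p "$DIR"
lfs setstripe -c -1 "$DIR"           # new files stripe across all OSTs
for i in $(seq 0 199); do touch "$DIR/f.$i"; done
# Shrink the statahead window, as the test does:
lctl set_param llite.*.statahead_max=64 llite.*.statahead_batch_max=64
lctl set_param llite.*.statahead_stats=clear
ls -l "$DIR" > /dev/null             # the scan that triggers statahead
# Read back the per-batch histogram and the hit/miss counters:
lctl get_param mdc.*.batch_stats
lctl get_param llite.*.statahead_stats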
mdc.lustre-MDT0000-mdc-ffff8800a9693000.batch_stats= snapshot_time: 1713404911.538586776 secs.nsecs start_time: 0.000000000 secs.nsecs elapsed_time: 1713404911.538586776 secs.nsecs subreqs per batch batches % cum % 1: 17 48 48 2: 4 11 60 4: 1 2 62 8: 3 8 71 16: 3 8 80 32: 3 8 88 64: 2 5 94 128: 1 2 97 256: 0 0 97 512: 0 0 97 1024: 1 2 100 mdc.lustre-MDT0001-mdc-ffff8800a9693000.batch_stats= snapshot_time: 1713404911.538687493 secs.nsecs start_time: 0.000000000 secs.nsecs elapsed_time: 1713404911.538687493 secs.nsecs subreqs per batch batches % cum % 1: 0 0 0 2: 0 0 0 4: 1 25 25 8: 2 50 75 16: 0 0 75 32: 0 0 75 64: 1 25 100 llite.lustre-ffff8800a9693000.statahead_agl=1 llite.lustre-ffff8800a9693000.statahead_batch_max=64 llite.lustre-ffff8800a9693000.statahead_max=64 llite.lustre-ffff8800a9693000.statahead_min=8 llite.lustre-ffff8800a9693000.statahead_running_max=16 llite.lustre-ffff8800a9693000.statahead_timeout=30 llite.lustre-ffff8800a9693000.statahead_stats= statahead total: 3 statahead wrong: 1 agl total: 3 list_total: 3 fname_total: 0 hit_total: 265 miss_total: 1748 llite.lustre-ffff8800a9693000.statahead_max=128 llite.lustre-ffff8800a9693000.statahead_batch_max=64 PASS 123f (180s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 123g: Test for stat-ahead advise ========== 21:49:16 (1713404956) total: 1000 open/close in 2.42 seconds: 413.87 ops/second llite.lustre-ffff8800a9693000.statahead_stats=clear mdc.lustre-MDT0000-mdc-ffff8800a9693000.batch_stats=clear mdc.lustre-MDT0001-mdc-ffff8800a9693000.batch_stats=clear statahead total: 1 statahead wrong: 0 agl total: 1 list_total: 0 fname_total: 0 hit_total: 1000 miss_total: 0 snapshot_time: 1713404965.590428786 secs.nsecs start_time: 1713404964.670193930 secs.nsecs elapsed_time: 0.920234856 secs.nsecs subreqs per batch batches % cum % 1: 5 17 17 2: 1 3 20 4: 1 3 24 8: 7 24 48 16: 0 0 48 32: 0 0 48 64: 15 51 100 snapshot_time: 1713404965.590526803 secs.nsecs start_time: 1713404964.670302312 secs.nsecs elapsed_time: 0.920224491 secs.nsecs subreqs per batch batches % cum % 1: 0 0 0 Hit total: 1000 PASS 123g (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 123h: Verify statahead work with the fname pattern via du ========================================================== 21:49:28 (1713404968) llite.lustre-ffff8800a9693000.enable_statahead_fname=1 Scan a directory with number regularized fname llite.lustre-ffff8800a9693000.statahead_stats=clear mdc.lustre-MDT0000-mdc-ffff8800a9693000.batch_stats=0 mdc.lustre-MDT0001-mdc-ffff8800a9693000.batch_stats=0 llite.lustre-ffff8800a9693000.statahead_max=1024 llite.lustre-ffff8800a9693000.statahead_batch_max=1024 statahead total: 0 statahead wrong: 0 agl total: 0 list_total: 0 fname_total: 0 hit_total: 0 miss_total: 0 Wait statahead thread (ll_sa_xxx) to exit... 
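The "Waiting 35s for ''" lines below are the framework polling until the ll_sa_xxx statahead kernel thread has exited, so that statahead_stats is only sampled after the thread has finished its work. A sketch of that polling pattern, assuming the thread shows up in the process list under a name starting with ll_sa_ (the helper name and timeout here are illustrative, not the framework's own code):

# Sketch: wait up to 35s for the statahead kernel thread to exit.
wait_sa_exit() {
    local deadline=$((SECONDS + 35))
    while ((SECONDS < deadline)); do
        # statahead threads are kernel threads named ll_sa_<pid>
        ps -e -o comm= | grep -q '^ll_sa_' || return 0
        sleep 1
    done
    echo "statahead thread still running after 35s" >&2
    return 1
}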
Waiting 35s for '' statahead total: 1 statahead wrong: 0 agl total: 1 list_total: 0 fname_total: 1 hit_total: 9993 miss_total: 0 snapshot_time: 1713405132.481532624 secs.nsecs start_time: 1713405119.716688015 secs.nsecs elapsed_time: 12.764844609 secs.nsecs subreqs per batch batches % cum % 1: 1 7 7 2: 0 0 7 4: 0 0 7 8: 1 7 15 16: 1 7 23 32: 0 0 23 64: 0 0 23 128: 0 0 23 256: 0 0 23 512: 0 0 23 1024: 10 76 100 snapshot_time: 1713405132.481741711 secs.nsecs start_time: 1713405119.716859021 secs.nsecs elapsed_time: 12.764882690 secs.nsecs subreqs per batch batches % cum % 1: 0 0 0 Scan a directory with zeroed padding number regularized fname llite.lustre-ffff8800a9693000.statahead_stats=clear mdc.lustre-MDT0000-mdc-ffff8800a9693000.batch_stats=0 mdc.lustre-MDT0001-mdc-ffff8800a9693000.batch_stats=0 llite.lustre-ffff8800a9693000.statahead_max=1024 llite.lustre-ffff8800a9693000.statahead_batch_max=1024 statahead total: 0 statahead wrong: 0 agl total: 0 list_total: 0 fname_total: 0 hit_total: 0 miss_total: 0 Wait statahead thread (ll_sa_xxx) to exit... Waiting 35s for '' statahead total: 1 statahead wrong: 0 agl total: 1 list_total: 0 fname_total: 1 hit_total: 9993 miss_total: 0 snapshot_time: 1713405394.471515086 secs.nsecs start_time: 1713405381.711752276 secs.nsecs elapsed_time: 12.759762810 secs.nsecs subreqs per batch batches % cum % 1: 1 7 7 2: 0 0 7 4: 0 0 7 8: 1 7 15 16: 1 7 23 32: 0 0 23 64: 0 0 23 128: 0 0 23 256: 0 0 23 512: 0 0 23 1024: 10 76 100 snapshot_time: 1713405394.471770423 secs.nsecs start_time: 1713405381.711928549 secs.nsecs elapsed_time: 12.759841874 secs.nsecs subreqs per batch batches % cum % 1: 0 0 0 llite.lustre-ffff8800a9693000.enable_statahead_fname=0 llite.lustre-ffff8800a9693000.statahead_batch_max=64 llite.lustre-ffff8800a9693000.statahead_max=128 PASS 123h (472s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 123i: Verify statahead work with the fname indexing pattern ========================================================== 21:57:21 (1713405441) llite.lustre-ffff8800a9693000.statahead_max=1024 llite.lustre-ffff8800a9693000.statahead_batch_max=32 llite.lustre-ffff8800a9693000.statahead_min=64 llite.lustre-ffff8800a9693000.enable_statahead_fname=1 Command: - createmany -m /mnt/lustre/d123i.sanity/f123i.sanity.%06d 1000 - ls /mnt/lustre/d123i.sanity/* > /dev/null total: 1000 create in 1.06 seconds: 939.52 ops/second llite.lustre-ffff8800a9693000.statahead_stats=clear mdc.lustre-MDT0000-mdc-ffff8800a9693000.batch_stats=0 mdc.lustre-MDT0001-mdc-ffff8800a9693000.batch_stats=0 statahead_stats (Pre): statahead total: 0 statahead wrong: 0 agl total: 0 list_total: 0 fname_total: 0 hit_total: 0 miss_total: 0 statahead_stats (Post): statahead total: 1 statahead wrong: 0 agl total: 1 list_total: 0 fname_total: 1 hit_total: 0 miss_total: 0 snapshot_time: 1713405444.787982059 secs.nsecs start_time: 1713405444.141229247 secs.nsecs elapsed_time: 0.646752812 secs.nsecs subreqs per batch batches % cum % 1: 6 8 8 2: 0 0 8 4: 7 10 18 8: 0 0 18 16: 0 0 18 32: 57 81 100 snapshot_time: 1713405444.788179868 secs.nsecs start_time: 1713405444.141314358 secs.nsecs elapsed_time: 0.646865510 secs.nsecs subreqs per batch batches % cum % 1: 0 0 0 Wait the statahead thread (ll_sa_xxx) to exit ... 
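Test 123i exercises the filename-pattern flavour of statahead: with enable_statahead_fname=1 the client recognizes numerically indexed names and prefetches attributes by generating successive names in the sequence rather than walking directory entries, first for zero-padded names created with a %06d format and then for plain suffixes driven explicitly by aheadmany. Consolidated from the "Command:" lines above (paths and counts exactly as reported there):

# Pass 1: zero-padded indices; a plain ls triggers fname statahead
createmany -m /mnt/lustre/d123i.sanity/f123i.sanity.%06d 1000
ls /mnt/lustre/d123i.sanity/* > /dev/null
# Pass 2: unpadded indices; aheadmany issues the stat-ahead requests itself
createmany -m /mnt/lustre/d123i.sanity/f123i.sanity 1000
aheadmany -c stat -N -s 0 -e 1000 -b f123i.sanity -d /mnt/lustre/d123i.sanity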
Waiting 35s for '' Waiting 25s for '' Waiting 15s for '' Waiting 5s for '' statahead total: 1 statahead wrong: 0 agl total: 1 list_total: 0 fname_total: 1 hit_total: 994 miss_total: 1 snapshot_time: 1713405475.623022778 secs.nsecs start_time: 1713405444.141229247 secs.nsecs elapsed_time: 31.481793531 secs.nsecs subreqs per batch batches % cum % 1: 7 9 9 2: 0 0 9 4: 7 9 18 8: 0 0 18 16: 1 1 19 32: 62 80 100 snapshot_time: 1713405475.623150232 secs.nsecs start_time: 1713405444.141314358 secs.nsecs elapsed_time: 31.481835874 secs.nsecs subreqs per batch batches % cum % 1: 0 0 0 Command: - createmany -m /mnt/lustre/d123i.sanity/f123i.sanity 1000 - aheadmany -c stat -N -s 0 -e 1000 -b f123i.sanity -d /mnt/lustre/d123i.sanity total: 1000 create in 1.12 seconds: 891.09 ops/second llite.lustre-ffff8800a9693000.statahead_stats=clear mdc.lustre-MDT0000-mdc-ffff8800a9693000.batch_stats=0 mdc.lustre-MDT0001-mdc-ffff8800a9693000.batch_stats=0 statahead_stats (Pre): statahead total: 0 statahead wrong: 0 agl total: 0 list_total: 0 fname_total: 0 hit_total: 0 miss_total: 0 statahead_stats (Post): statahead total: 1 statahead wrong: 0 agl total: 1 list_total: 0 fname_total: 1 hit_total: 0 miss_total: 0 snapshot_time: 1713405478.238387194 secs.nsecs start_time: 1713405477.686699406 secs.nsecs elapsed_time: 0.551687788 secs.nsecs subreqs per batch batches % cum % 1: 0 0 0 2: 0 0 0 4: 0 0 0 8: 2 3 3 16: 0 0 3 32: 57 96 100 snapshot_time: 1713405478.238594061 secs.nsecs start_time: 1713405477.686779516 secs.nsecs elapsed_time: 0.551814545 secs.nsecs subreqs per batch batches % cum % 1: 0 0 0 Wait the statahead thread (ll_sa_xxx) to exit ... Waiting 35s for '' Waiting 25s for '' Waiting 15s for '' statahead total: 1 statahead wrong: 0 agl total: 1 list_total: 0 fname_total: 1 hit_total: 995 miss_total: 0 snapshot_time: 1713405509.016442898 secs.nsecs start_time: 1713405477.686699406 secs.nsecs elapsed_time: 31.329743492 secs.nsecs subreqs per batch batches % cum % 1: 1 1 1 2: 0 0 1 4: 0 0 1 8: 2 3 4 16: 0 0 4 32: 63 95 100 snapshot_time: 1713405509.016534661 secs.nsecs start_time: 1713405477.686779516 secs.nsecs elapsed_time: 31.329755145 secs.nsecs subreqs per batch batches % cum % 1: 0 0 0 - unlinked 0 (time 1713405510 ; total 0 ; last 0) total: 1000 unlinks in 1 seconds: 1000.000000 unlinks/second - unlinked 0 (time 1713405512 ; total 0 ; last 0) total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second llite.lustre-ffff8800a9693000.enable_statahead_fname=0 llite.lustre-ffff8800a9693000.statahead_min=8 llite.lustre-ffff8800a9693000.statahead_batch_max=64 llite.lustre-ffff8800a9693000.statahead_max=128 PASS 123i (73s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 124a: lru resize ================================================================================================= 21:58:36 (1713405516) striped dir -i0 -c2 -H all_char /mnt/lustre/d124a.sanity create 2000 files at /mnt/lustre/d124a.sanity total: 2000 open/close in 4.22 seconds: 473.40 ops/second NSDIR=ldlm.namespaces.lustre-MDT0000-mdc-ffff8800a9693000 NS=ldlm.namespaces.lustre-MDT0000-mdc-ffff8800a9693000 LRU=1004 LIMIT=46624 LVF=5572500 OLD_LVF=100 Sleep 50 sec ...1004...1004...1004...1004...1004...971...971...734...734...590 Dropped 414 locks in 50s unlink 2000 files at /mnt/lustre/d124a.sanity - unlinked 0 (time 1713405577 ; total 0 ; last 0) total: 2000 unlinks in 3 seconds: 666.666687 unlinks/second PASS 124a (65s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y 
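Test 124b, which starts below, times ls -la over 8000 files with LRU resize disabled versus enabled. Writing a nonzero value into lru_size pins the client's DLM lock LRU at that many entries and disables dynamic resizing; writing 0 re-enables it, letting the LRU grow (here to roughly 4000 locks) according to the server lock volume, which is why the resized run's ls -la comes out dramatically faster. A minimal sketch of the toggle, assuming client-side MDC namespaces named like those in the log:

# Disable dynamic LRU resize: pin the client's DLM lock LRU at 400 entries
lctl set_param ldlm.namespaces.*mdc*.lru_size=400
# ... run the metadata workload (e.g. repeated ls -la) and time it ...
# Re-enable dynamic resize; 0 means the LRU is sized from the server lock volume
lctl set_param ldlm.namespaces.*mdc*.lru_size=0
lctl get_param ldlm.namespaces.*mdc*.lru_size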
debug_raw_pointers=Y == sanity test 124b: lru resize (performance test) ================================================================================= 21:59:43 (1713405583) ldlm.namespaces.lustre-MDT0000-mdc-ffff8800a9693000.lru_size=400 ldlm.namespaces.lustre-MDT0001-mdc-ffff8800a9693000.lru_size=400 striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d124b.sanity/disable_lru_resize - open/close 3647 (time 1713405594.96 total 10.00 last 364.69) - open/close 6129 (time 1713405604.97 total 20.00 last 248.11) total: 8000 open/close in 27.86 seconds: 287.17 ops/second doing ls -la /mnt/lustre/d124b.sanity/disable_lru_resize 3 times ls -la time: 70 seconds lru_size = 400 400 - unlinked 0 (time 1713405686 ; total 0 ; last 0) total: 8000 unlinks in 30 seconds: 266.666656 unlinks/second ldlm.namespaces.lustre-MDT0000-mdc-ffff8800a9693000.lru_size=0 ldlm.namespaces.lustre-MDT0001-mdc-ffff8800a9693000.lru_size=0 striped dir -i0 -c2 -H crush2 /mnt/lustre/d124b.sanity/enable_lru_resize - open/close 2508 (time 1713405728.27 total 10.00 last 250.74) - open/close 4928 (time 1713405738.27 total 20.00 last 241.95) - open/close 7329 (time 1713405748.27 total 30.00 last 240.09) total: 8000 open/close in 31.47 seconds: 254.18 ops/second doing ls -la /mnt/lustre/d124b.sanity/enable_lru_resize 3 times ls -la time: 10 seconds lru_size = 4052 3955 ls -la is 85% faster with lru resize enabled - unlinked 0 (time 1713405762 ; total 0 ; last 0) total: 8000 unlinks in 16 seconds: 500.000000 unlinks/second PASS 124b (197s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 124c: LRUR cancel very aged locks ========= 22:03:03 (1713405783) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d124c.sanity total: 100 open/close in 0.95 seconds: 105.22 ops/second unused=55, max_age=3900000, recalc_p=10 ldlm.namespaces.lustre-MDT0000-mdc-ffff8800a9693000.lru_max_age=1000 sleep 20 seconds... - unlinked 0 (time 1713405805 ; total 0 ; last 0) total: 100 unlinks in 0 seconds: inf unlinks/second PASS 124c (24s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 124d: cancel very aged locks if lru-resize disabled ========================================================== 22:03:30 (1713405810) ldlm.namespaces.lustre-MDT0000-mdc-ffff8800a9693000.lru_size=400 ldlm.namespaces.lustre-MDT0001-mdc-ffff8800a9693000.lru_size=400 striped dir -i0 -c2 -H crush /mnt/lustre/d124d.sanity total: 100 open/close in 0.97 seconds: 103.44 ops/second unused=57, max_age=3900000, recalc_p=10 ldlm.namespaces.lustre-MDT0000-mdc-ffff8800a9693000.lru_max_age=1000 sleep 20 seconds... 
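The 20-second sleep above gives the periodic LDLM pool recalculation time to act: lru_max_age was just lowered from its 3900000 ms (65-minute) default to 1000 ms, so every idle lock in the LRU becomes "very aged" and is cancelled on the next recalc pass even with lru-resize disabled. A sketch of the same check, assuming the client-side namespace and parameter names shown in the log:

# Sketch: cancel aged locks by lowering the LRU age limit (milliseconds)
NS='ldlm.namespaces.*MDT0000-mdc*'
lctl get_param $NS.lock_unused_count   # locks currently idle in the LRU
lctl set_param $NS.lru_max_age=1000    # age out anything older than 1s
sleep 20                               # wait for the periodic recalc to run
lctl get_param $NS.lock_unused_count   # should now be near zero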
- unlinked 0 (time 1713405833 ; total 0 ; last 0) total: 100 unlinks in 1 seconds: 100.000000 unlinks/second ldlm.namespaces.lustre-MDT0000-mdc-ffff8800a9693000.lru_size=0 ldlm.namespaces.lustre-MDT0001-mdc-ffff8800a9693000.lru_size=0 PASS 124d (24s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 125: don't return EPROTO when a dir has a non-default striping and ACLs ========================================================== 22:03:56 (1713405836) uid=500(sanityusr) gid=500(sanityusr) groups=500(sanityusr) striped dir -i1 -c2 -H crush /mnt/lustre/d125.sanity drwxrwxr-x+ 2 root root 8192 Apr 17 22:03 /mnt/lustre/d125.sanity PASS 125 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 126: check that the fsgid provided by the client is taken into account ========================================================== 22:04:02 (1713405842) running as uid/gid/euid/egid 0/1/0/1, groups: [touch] [/mnt/lustre/f126.sanity] PASS 126 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 127a: verify the client stats are sane ==== 22:04:06 (1713405846) enable_stats_header=1 stats before reset osc.lustre-OST0000-osc-ffff8800a9693000.stats= snapshot_time 1713405847.107559930 secs.nsecs start_time 1713404693.228741703 secs.nsecs elapsed_time 1153.878818227 secs.nsecs req_waittime 296993 samples [usecs] 525 653066 13162240004 1555131284679444 req_active 296993 samples [reqs] 1 1169 41028061 12912167739 ldlm_glimpse_enqueue 135231 samples [reqs] 1 1 135231 135231 ost_setattr 125442 samples [usecs] 713 257698 5163541701 368789948167407 ost_connect 1 samples [usecs] 2330 2330 2330 5428900 ost_statfs 1 samples [usecs] 1501 1501 1501 2253001 ldlm_cancel 36205 samples [usecs] 525 645023 235069576 11751850323972 obd_ping 113 samples [usecs] 544 177448 449292 32997358016 osc.lustre-OST0001-osc-ffff8800a9693000.stats= snapshot_time 1713405847.107711340 secs.nsecs start_time 1713404693.229910411 secs.nsecs elapsed_time 1153.877800929 secs.nsecs req_waittime 299551 samples [usecs] 407 757638 13411745485 1574627436568437 req_active 299551 samples [reqs] 1 1144 43228553 14326280083 ldlm_glimpse_enqueue 136508 samples [reqs] 1 1 136508 136508 ost_setattr 126686 samples [usecs] 679 252163 5224312878 362149012981268 ost_connect 1 samples [usecs] 1877 1877 1877 3523129 ost_statfs 1 samples [usecs] 1785 1785 1785 3186225 ldlm_cancel 36244 samples [usecs] 407 757638 262688435 29678482283929 obd_ping 111 samples [usecs] 501 176291 417050 31959941526 osc.lustre-OST0000-osc-ffff8800a9693000.stats=0 osc.lustre-OST0001-osc-ffff8800a9693000.stats=0 1+0 records in 1+0 records out 2097152 bytes (2.1 MB) copied, 0.0808189 s, 25.9 MB/s 1+0 records in 1+0 records out 2097152 bytes (2.1 MB) copied, 0.138096 s, 15.2 MB/s got name=req_waittime count=7 unit=[usecs] min=2402 max=40094 got name=req_active count=7 unit=[reqs] min=1 max=1 got name=ldlm_extent_enqueue count=2 unit=[reqs] min=1 max=1 got name=read_bytes count=1 unit=[bytes] min=2097152 max=2097152 got name=write_bytes count=1 unit=[bytes] min=2097152 max=2097152 got name=ost_read count=1 unit=[usecs] min=19181 max=19181 got name=ost_write count=1 unit=[usecs] min=28762 max=28762 got name=ost_punch count=1 unit=[usecs] min=2547 max=2547 got name=ldlm_cancel count=2 unit=[usecs] min=9727 max=40094 PASS 127a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 127b: verify the llite 
client stats are sane ========================================================== 22:04:11 (1713405851) stats before reset llite.lustre-ffff8800a9693000.stats= snapshot_time 1713405852.300787177 secs.nsecs start_time 1713404693.202423507 secs.nsecs elapsed_time 1159.098363670 secs.nsecs read_bytes 1 samples [bytes] 2097152 2097152 2097152 4398046511104 write_bytes 1 samples [bytes] 2097152 2097152 2097152 4398046511104 read 1 samples [usecs] 133771 133771 133771 17894680441 write 1 samples [usecs] 73626 73626 73626 5420787876 ioctl 79 samples [reqs] open 40438 samples [usecs] 3 21790 1806787 914162167 close 40438 samples [usecs] 25 64206 84781485 232579329655 seek 1 samples [usecs] 17 17 17 289 readdir 89 samples [usecs] 3 133926 1052472 61845760510 setattr 21205 samples [usecs] 4030 551176 244156831 14732532283395 truncate 1 samples [usecs] 10633 10633 10633 113060689 getattr 75575 samples [usecs] 83 1226568 108593238 21719935951856 create 2000 samples [usecs] 615 39104 2124430 3766541280 unlink 41405 samples [usecs] 633 313346 162604036 2799488318322 mkdir 7 samples [usecs] 2872 8180 32377 169125665 rmdir 4 samples [usecs] 4401 9269 25925 185810779 mknod 42406 samples [usecs] 614 255630 157809296 2478723909994 statfs 10 samples [usecs] 166 45104 95875 3190820867 setxattr 1 samples [usecs] 22323 22323 22323 498316329 getxattr 125 samples [usecs] 9 7200 269407 732419453 getxattr_hits 10 samples [reqs] inode_permission 785994 samples [usecs] 0 12219 27291339 5721768717 opencount 40443 samples [reqs] 1 4 40462 40516 openclosetime 14 samples [usecs] 2113 133173314 244590209 24074556839123903 llite.lustre-ffff8800a9693000.stats=0 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00807605 s, 507 kB/s 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00191988 s, 2.1 MB/s 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00824719 s, 497 kB/s 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.000639827 s, 6.4 MB/s got name=read_bytes count=2 unit=[bytes] min=4096 max=4096 got name=write_bytes count=2 unit=[bytes] min=4096 max=4096 got name=read count=2 unit=[usecs] min=181 max=7616 got name=write count=2 unit=[usecs] min=1087 max=4480 got name=open count=4 unit=[usecs] min=32 max=8022 got name=close count=4 unit=[usecs] min=109 max=3162 got name=seek count=2 unit=[usecs] min=16 max=18 got name=truncate count=1 unit=[usecs] min=11043 max=11043 got name=mknod count=1 unit=[usecs] min=5631 max=5631 got name=getxattr count=1 unit=[usecs] min=3539 max=3539 got name=inode_permission count=9 unit=[usecs] min=4 max=3831 got name=opencount count=4 unit=[reqs] min=1 max=4 got name=openclosetime count=3 unit=[usecs] min=4096 max=64578 PASS 127b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 127c: test llite extent stats with regular & mmap i/o ========================================================== 22:04:16 (1713405856) llite.lustre-ffff8800a9693000.extents_stats=1 1+0 records in 1+0 records out 3072 bytes (3.1 kB) copied, 0.00482941 s, 636 kB/s 1+0 records in 1+0 records out 3072 bytes (3.1 kB) copied, 0.00861232 s, 357 kB/s 1+0 records in 1+0 records out 3072 bytes (3.1 kB) copied, 0.00375136 s, 819 kB/s 1+0 records in 1+0 records out 3072 bytes (3.1 kB) copied, 0.000621727 s, 4.9 MB/s 1+0 records in 1+0 records out 6144 bytes (6.1 kB) copied, 0.00749207 s, 820 kB/s 1+0 records in 1+0 records out 6144 bytes (6.1 kB) copied, 0.00195724 s, 3.1 MB/s 1+0 records in 1+0 records out 6144 bytes (6.1 kB) copied, 
0.000603964 s, 10.2 MB/s 1+0 records in 1+0 records out 6144 bytes (6.1 kB) copied, 0.000621486 s, 9.9 MB/s 1+0 records in 1+0 records out 12288 bytes (12 kB) copied, 0.00802711 s, 1.5 MB/s 1+0 records in 1+0 records out 12288 bytes (12 kB) copied, 0.00201476 s, 6.1 MB/s 1+0 records in 1+0 records out 12288 bytes (12 kB) copied, 0.00062197 s, 19.8 MB/s 1+0 records in 1+0 records out 12288 bytes (12 kB) copied, 0.000614537 s, 20.0 MB/s 1+0 records in 1+0 records out 24576 bytes (25 kB) copied, 0.00843834 s, 2.9 MB/s 1+0 records in 1+0 records out 24576 bytes (25 kB) copied, 0.00262205 s, 9.4 MB/s 1+0 records in 1+0 records out 24576 bytes (25 kB) copied, 0.000629046 s, 39.1 MB/s 1+0 records in 1+0 records out 24576 bytes (25 kB) copied, 0.000595087 s, 41.3 MB/s 1+0 records in 1+0 records out 49152 bytes (49 kB) copied, 0.00934586 s, 5.3 MB/s 1+0 records in 1+0 records out 49152 bytes (49 kB) copied, 0.00374667 s, 13.1 MB/s 1+0 records in 1+0 records out 49152 bytes (49 kB) copied, 0.00068572 s, 71.7 MB/s 1+0 records in 1+0 records out 49152 bytes (49 kB) copied, 0.000632233 s, 77.7 MB/s 1+0 records in 1+0 records out 98304 bytes (98 kB) copied, 0.0117136 s, 8.4 MB/s 1+0 records in 1+0 records out 98304 bytes (98 kB) copied, 0.00600628 s, 16.4 MB/s 1+0 records in 1+0 records out 98304 bytes (98 kB) copied, 0.000902064 s, 109 MB/s 1+0 records in 1+0 records out 98304 bytes (98 kB) copied, 0.000765363 s, 128 MB/s 1+0 records in 1+0 records out 196608 bytes (197 kB) copied, 0.0166047 s, 11.8 MB/s 1+0 records in 1+0 records out 196608 bytes (197 kB) copied, 0.0104212 s, 18.9 MB/s 1+0 records in 1+0 records out 196608 bytes (197 kB) copied, 0.00102112 s, 193 MB/s 1+0 records in 1+0 records out 196608 bytes (197 kB) copied, 0.000940231 s, 209 MB/s 1+0 records in 1+0 records out 393216 bytes (393 kB) copied, 0.0249606 s, 15.8 MB/s 1+0 records in 1+0 records out 393216 bytes (393 kB) copied, 0.0189614 s, 20.7 MB/s 1+0 records in 1+0 records out 393216 bytes (393 kB) copied, 0.00137327 s, 286 MB/s 1+0 records in 1+0 records out 393216 bytes (393 kB) copied, 0.00140946 s, 279 MB/s 1+0 records in 1+0 records out 786432 bytes (786 kB) copied, 0.0401892 s, 19.6 MB/s 1+0 records in 1+0 records out 786432 bytes (786 kB) copied, 0.0358471 s, 21.9 MB/s 1+0 records in 1+0 records out 786432 bytes (786 kB) copied, 0.00243035 s, 324 MB/s 1+0 records in 1+0 records out 786432 bytes (786 kB) copied, 0.00234218 s, 336 MB/s 1+0 records in 1+0 records out 1572864 bytes (1.6 MB) copied, 0.0617147 s, 25.5 MB/s 1+0 records in 1+0 records out 1572864 bytes (1.6 MB) copied, 0.0625218 s, 25.2 MB/s 1+0 records in 1+0 records out 1572864 bytes (1.6 MB) copied, 0.00441481 s, 356 MB/s 1+0 records in 1+0 records out 1572864 bytes (1.6 MB) copied, 0.00391219 s, 402 MB/s 1+0 records in 1+0 records out 3145728 bytes (3.1 MB) copied, 0.0958281 s, 32.8 MB/s 1+0 records in 1+0 records out 3145728 bytes (3.1 MB) copied, 0.161999 s, 19.4 MB/s 1+0 records in 1+0 records out 3145728 bytes (3.1 MB) copied, 0.00732845 s, 429 MB/s 1+0 records in 1+0 records out 3145728 bytes (3.1 MB) copied, 0.00598598 s, 526 MB/s 1+0 records in 1+0 records out 6291456 bytes (6.3 MB) copied, 0.179793 s, 35.0 MB/s 1+0 records in 1+0 records out 6291456 bytes (6.3 MB) copied, 0.188159 s, 33.4 MB/s 1+0 records in 1+0 records out 6291456 bytes (6.3 MB) copied, 0.00592004 s, 1.1 GB/s 1+0 records in 1+0 records out 6291456 bytes (6.3 MB) copied, 0.00438651 s, 1.4 GB/s 1+0 records in 1+0 records out 12582912 bytes (13 MB) copied, 0.30501 s, 41.3 MB/s 1+0 records 
in 1+0 records out 12582912 bytes (13 MB) copied, 0.334135 s, 37.7 MB/s 1+0 records in 1+0 records out 12582912 bytes (13 MB) copied, 0.013312 s, 945 MB/s 1+0 records in 1+0 records out 12582912 bytes (13 MB) copied, 0.0123263 s, 1.0 GB/s 1+0 records in 1+0 records out 25165824 bytes (25 MB) copied, 0.661853 s, 38.0 MB/s 1+0 records in 1+0 records out 25165824 bytes (25 MB) copied, 0.588317 s, 42.8 MB/s 1+0 records in 1+0 records out 25165824 bytes (25 MB) copied, 0.0194686 s, 1.3 GB/s 1+0 records in 1+0 records out 25165824 bytes (25 MB) copied, 0.0172649 s, 1.5 GB/s 1+0 records in 1+0 records out 50331648 bytes (50 MB) copied, 1.19525 s, 42.1 MB/s 1+0 records in 1+0 records out 50331648 bytes (50 MB) copied, 1.12953 s, 44.6 MB/s 1+0 records in 1+0 records out 50331648 bytes (50 MB) copied, 0.0321794 s, 1.6 GB/s 1+0 records in 1+0 records out 50331648 bytes (50 MB) copied, 0.0305597 s, 1.6 GB/s llite.lustre-ffff8800a9693000.extents_stats= snapshot_time: 1713405864.887905699 secs.nsecs start_time: 1713405857.156642759 secs.nsecs elapsed_time: 7.731262940 secs.nsecs read | write extents calls % cum% | calls % cum% 0K - 4K : 2 6 6 | 2 6 6 4K - 8K : 2 6 13 | 2 6 13 8K - 16K : 2 6 20 | 2 6 20 16K - 32K : 2 6 26 | 2 6 26 32K - 64K : 2 6 33 | 2 6 33 64K - 128K : 2 6 40 | 2 6 40 128K - 256K : 2 6 46 | 2 6 46 256K - 512K : 2 6 53 | 2 6 53 512K - 1024K : 2 6 60 | 2 6 60 1M - 2M : 2 6 66 | 2 6 66 2M - 4M : 2 6 73 | 2 6 73 4M - 8M : 2 6 80 | 2 6 80 8M - 16M : 2 6 86 | 2 6 86 16M - 32M : 2 6 93 | 2 6 93 32M - 64M : 2 6 100 | 2 6 100 llite.lustre-ffff8800a9693000.extents_stats=c 1+0 records in 1+0 records out 524288 bytes (524 kB) copied, 0.0627 s, 8.4 MB/s llite.lustre-ffff8800a9693000.extents_stats= snapshot_time: 1713405865.445880846 secs.nsecs start_time: 1713405865.253309354 secs.nsecs elapsed_time: 0.192571492 secs.nsecs read | write extents calls % cum% | calls % cum% 0K - 4K : 0 0 0 | 0 0 0 4K - 8K : 256 100 100 | 128 99 99 8K - 16K : 0 0 100 | 0 0 99 16K - 32K : 0 0 100 | 0 0 99 32K - 64K : 0 0 100 | 0 0 99 64K - 128K : 0 0 100 | 0 0 99 128K - 256K : 0 0 100 | 0 0 99 256K - 512K : 0 0 100 | 0 0 99 512K - 1024K : 0 0 100 | 1 0 100 llite.lustre-ffff8800a9693000.extents_stats=0 PASS 127c (11s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 128: interactive lfs for 2 consecutive find's ========================================================== 22:04:29 (1713405869) lfs: failed for 'find': No such file or directory /mnt/lustre/f128.sanity PASS 128 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 129: test directory size limit ================================================================================== 22:04:34 (1713405874) pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 mcreate: cannot create `/mnt/lustre/d129.sanity/file_base_471' with mode 0100644: No space left on device pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 rc=28 returned as expected after 471 files total: 5 open/close in 0.06 seconds: 86.07 ops/second [ 4273.228705] Lustre: 13476:0:(osd_handler.c:624:osd_ldiskfs_add_entry()) lustre-MDT0000: directory (inode: 513366, FID: [0x20000040c:0x826c:0x0]) is approaching max size limit [ 4273.744871] Lustre: 13476:0:(osd_handler.c:624:osd_ldiskfs_add_entry()) lustre-MDT0000: directory (inode: 513366, FID: [0x20000040c:0x826c:0x0]) is approaching max size limit [ 4278.458841] Lustre: 13476:0:(osd_handler.c:620:osd_ldiskfs_add_entry()) lustre-MDT0000: 
directory (inode: 513366, FID: [0x20000040c:0x826c:0x0]) has reached max size limit - unlinked 0 (time 1713405893 ; total 0 ; last 0) unlink(/mnt/lustre/d129.sanity/file_base_471) error: No such file or directory total: 471 unlinks in 2 seconds: 235.500000 unlinks/second pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 PASS 129 (24s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 130a: FIEMAP (1-stripe file) ============== 22:05:01 (1713405901) 1+0 records in 1+0 records out 65536 bytes (66 kB) copied, 0.00808019 s, 8.1 MB/s Filesystem type is: bd00bd0 File size of /mnt/lustre/f130a.sanity is 65536 (64 blocks of 1024 bytes) ext: device_logical: physical_offset: length: dev: flags: 0: 0.. 63: 538112.. 538175: 64: 0001: last,net,eof /mnt/lustre/f130a.sanity: 1 extent found FIEMAP on single striped file succeeded PASS 130a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 130b: FIEMAP (2-stripe file) ============== 22:05:06 (1713405906) 2+0 records in 2+0 records out 2097152 bytes (2.1 MB) copied, 0.0952279 s, 22.0 MB/s Filesystem type is: bd00bd0 File size of /mnt/lustre/f130b.sanity is 2097152 (2048 blocks of 1024 bytes) ext: device_logical: physical_offset: length: dev: flags: 0: 0.. 1023: 2883584.. 2884607: 1024: 0001: net 1: 0.. 1023: 659456.. 660479: 1024: 0000: last,net /mnt/lustre/f130b.sanity: 2 extents found FIEMAP on 2-stripe file succeeded PASS 130b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 130c: FIEMAP (2-stripe file with hole) ==== 22:05:11 (1713405911) 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0565832 s, 18.5 MB/s Filesystem type is: bd00bd0 File size of /mnt/lustre/f130c.sanity is 2097152 (2048 blocks of 1024 bytes) ext: device_logical: physical_offset: length: dev: flags: 0: 512.. 1023: 580604.. 581115: 512: 0000: net 1: 512.. 1023: 578740.. 579251: 512: 0001: last,net /mnt/lustre/f130c.sanity: 2 extents found FIEMAP on 2-stripe file with hole succeeded PASS 130c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 130d: FIEMAP (N-stripe file) ============== 22:05:16 (1713405916) SKIP: sanity test_130d needs >= 3 OSTs SKIP 130d (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 130e: FIEMAP (test continuation FIEMAP calls) ========================================================== 22:05:19 (1713405919) Filesystem type is: bd00bd0 File size of /mnt/lustre/f130e.sanity is 67043328 (65472 blocks of 1024 bytes) ext: device_logical: physical_offset: length: dev: flags: 0: 0.. 63: 556780.. 556843: 64: 0001: net 1: 128.. 191: 556844.. 556907: 64: 0001: net 2: 256.. 319: 556908.. 556971: 64: 0001: net 3: 384.. 447: 556972.. 557035: 64: 0001: net 4: 512.. 575: 584696.. 584759: 64: 0001: net 5: 640.. 703: 584760.. 584823: 64: 0001: net 6: 768.. 831: 584824.. 584887: 64: 0001: net 7: 896.. 959: 584888.. 584951: 64: 0001: net 8: 1024.. 1087: 584952.. 585015: 64: 0001: net 9: 1152.. 1215: 585016.. 585079: 64: 0001: net 10: 1280.. 1343: 585080.. 585143: 64: 0001: net 11: 1408.. 1471: 585144.. 585207: 64: 0001: net 12: 1536.. 1599: 585208.. 585271: 64: 0001: net 13: 1664.. 1727: 585272..
585335: 64: 0001: net 14: 1792.. 1855: 585336.. 585399: 64: 0001: net 15: 1920.. 1983: 585400.. 585463: 64: 0001: net 16: 2048.. 2111: 585464.. 585527: 64: 0001: net 17: 2176.. 2239: 585528.. 585591: 64: 0001: net 18: 2304.. 2367: 585592.. 585655: 64: 0001: net 19: 2432.. 2495: 585656.. 585719: 64: 0001: net 20: 2560.. 2623: 585728.. 585791: 64: 0001: net 21: 2688.. 2751: 585792.. 585855: 64: 0001: net 22: 2816.. 2879: 585856.. 585919: 64: 0001: net 23: 2944.. 3007: 585920.. 585983: 64: 0001: net 24: 3072.. 3135: 585984.. 586047: 64: 0001: net 25: 3200.. 3263: 586048.. 586111: 64: 0001: net 26: 3328.. 3391: 586112.. 586175: 64: 0001: net 27: 3456.. 3519: 586176.. 586239: 64: 0001: net 28: 3584.. 3647: 586240.. 586303: 64: 0001: net 29: 3712.. 3775: 586304.. 586367: 64: 0001: net 30: 3840.. 3903: 586368.. 586431: 64: 0001: net 31: 3968.. 4031: 586432.. 586495: 64: 0001: net 32: 4096.. 4159: 2883648.. 2883711: 64: 0001: net 33: 4224.. 4287: 2883776.. 2883839: 64: 0001: net 34: 4352.. 4415: 2883904.. 2883967: 64: 0001: net 35: 4480.. 4543: 2884032.. 2884095: 64: 0001: net 36: 4608.. 4671: 2884160.. 2884223: 64: 0001: net 37: 4736.. 4799: 2884288.. 2884351: 64: 0001: net 38: 4864.. 4927: 2884416.. 2884479: 64: 0001: net 39: 4992.. 5055: 2884544.. 2884607: 64: 0001: net 40: 5120.. 5183: 2884672.. 2884735: 64: 0001: net 41: 5248.. 5311: 2884800.. 2884863: 64: 0001: net 42: 5376.. 5439: 2884928.. 2884991: 64: 0001: net 43: 5504.. 5567: 2885056.. 2885119: 64: 0001: net 44: 5632.. 5695: 2885184.. 2885247: 64: 0001: net 45: 5760.. 5823: 2885312.. 2885375: 64: 0001: net 46: 5888.. 5951: 2885440.. 2885503: 64: 0001: net 47: 6016.. 6079: 2885568.. 2885631: 64: 0001: net 48: 6144.. 6207: 2885696.. 2885759: 64: 0001: net 49: 6272.. 6335: 2885824.. 2885887: 64: 0001: net 50: 6400.. 6463: 2885952.. 2886015: 64: 0001: net 51: 6528.. 6591: 2886080.. 2886143: 64: 0001: net 52: 6656.. 6719: 2886208.. 2886271: 64: 0001: net 53: 6784.. 6847: 2886336.. 2886399: 64: 0001: net 54: 6912.. 6975: 2886464.. 2886527: 64: 0001: net 55: 7040.. 7103: 2886592.. 2886655: 64: 0001: net 56: 7168.. 7231: 2886720.. 2886783: 64: 0001: net 57: 7296.. 7359: 2886848.. 2886911: 64: 0001: net 58: 7424.. 7487: 2886976.. 2887039: 64: 0001: net 59: 7552.. 7615: 2887104.. 2887167: 64: 0001: net 60: 7680.. 7743: 2887232.. 2887295: 64: 0001: net 61: 7808.. 7871: 2887360.. 2887423: 64: 0001: net 62: 7936.. 7999: 2887488.. 2887551: 64: 0001: net 63: 8064.. 8127: 2887616.. 2887679: 64: 0001: net 64: 8192.. 8255: 2891776.. 2891839: 64: 0001: net 65: 8320.. 8383: 2891904.. 2891967: 64: 0001: net 66: 8448.. 8511: 2892032.. 2892095: 64: 0001: net 67: 8576.. 8639: 2892160.. 2892223: 64: 0001: net 68: 8704.. 8767: 2892288.. 2892351: 64: 0001: net 69: 8832.. 8895: 2892416.. 2892479: 64: 0001: net 70: 8960.. 9023: 2892544.. 2892607: 64: 0001: net 71: 9088.. 9151: 2892672.. 2892735: 64: 0001: net 72: 9216.. 9279: 2892800.. 2892863: 64: 0001: net 73: 9344.. 9407: 2892928.. 2892991: 64: 0001: net 74: 9472.. 9535: 2893056.. 2893119: 64: 0001: net 75: 9600.. 9663: 2893184.. 2893247: 64: 0001: net 76: 9728.. 9791: 2893312.. 2893375: 64: 0001: net 77: 9856.. 9919: 2893440.. 2893503: 64: 0001: net 78: 9984.. 10047: 2893568.. 2893631: 64: 0001: net 79: 10112.. 10175: 2893696.. 2893759: 64: 0001: net 80: 10240.. 10303: 2893824.. 2893887: 64: 0001: net 81: 10368.. 10431: 2893952.. 2894015: 64: 0001: net 82: 10496.. 10559: 2894080.. 2894143: 64: 0001: net 83: 10624.. 10687: 2894208.. 2894271: 64: 0001: net 84: 10752.. 10815: 2894336.. 
2894399: 64: 0001: net 85: 10880.. 10943: 2894464.. 2894527: 64: 0001: net 86: 11008.. 11071: 2894592.. 2894655: 64: 0001: net 87: 11136.. 11199: 2894720.. 2894783: 64: 0001: net 88: 11264.. 11327: 2894848.. 2894911: 64: 0001: net 89: 11392.. 11455: 2894976.. 2895039: 64: 0001: net 90: 11520.. 11583: 2895104.. 2895167: 64: 0001: net 91: 11648.. 11711: 2895232.. 2895295: 64: 0001: net 92: 11776.. 11839: 2895360.. 2895423: 64: 0001: net 93: 11904.. 11967: 2895488.. 2895551: 64: 0001: net 94: 12032.. 12095: 2895616.. 2895679: 64: 0001: net 95: 12160.. 12223: 2895744.. 2895807: 64: 0001: net 96: 12288.. 12351: 2895872.. 2895935: 64: 0001: net 97: 12416.. 12479: 2896000.. 2896063: 64: 0001: net 98: 12544.. 12607: 2896128.. 2896191: 64: 0001: net 99: 12672.. 12735: 2896256.. 2896319: 64: 0001: net 100: 12800.. 12863: 2896384.. 2896447: 64: 0001: net 101: 12928.. 12991: 2896512.. 2896575: 64: 0001: net 102: 13056.. 13119: 2896640.. 2896703: 64: 0001: net 103: 13184.. 13247: 2896768.. 2896831: 64: 0001: net 104: 13312.. 13375: 2896896.. 2896959: 64: 0001: net 105: 13440.. 13503: 2897024.. 2897087: 64: 0001: net 106: 13568.. 13631: 2897152.. 2897215: 64: 0001: net 107: 13696.. 13759: 2897280.. 2897343: 64: 0001: net 108: 13824.. 13887: 2897408.. 2897471: 64: 0001: net 109: 13952.. 14015: 2897536.. 2897599: 64: 0001: net 110: 14080.. 14143: 2897664.. 2897727: 64: 0001: net 111: 14208.. 14271: 2897792.. 2897855: 64: 0001: net 112: 14336.. 14399: 2897920.. 2897983: 64: 0001: net 113: 14464.. 14527: 2898048.. 2898111: 64: 0001: net 114: 14592.. 14655: 2898176.. 2898239: 64: 0001: net 115: 14720.. 14783: 2898304.. 2898367: 64: 0001: net 116: 14848.. 14911: 2898432.. 2898495: 64: 0001: net 117: 14976.. 15039: 2898560.. 2898623: 64: 0001: net 118: 15104.. 15167: 2898688.. 2898751: 64: 0001: net 119: 15232.. 15295: 2898816.. 2898879: 64: 0001: net 120: 15360.. 15423: 2898944.. 2899007: 64: 0001: net 121: 15488.. 15551: 2899072.. 2899135: 64: 0001: net 122: 15616.. 15679: 2899200.. 2899263: 64: 0001: net 123: 15744.. 15807: 2899328.. 2899391: 64: 0001: net 124: 15872.. 15935: 2899456.. 2899519: 64: 0001: net 125: 16000.. 16063: 2899584.. 2899647: 64: 0001: net 126: 16128.. 16191: 2899712.. 2899775: 64: 0001: net 127: 16256.. 16319: 2899840.. 2899903: 64: 0001: net 128: 16384.. 16447: 2899968.. 2900031: 64: 0001: net 129: 16512.. 16575: 2900096.. 2900159: 64: 0001: net 130: 16640.. 16703: 2900224.. 2900287: 64: 0001: net 131: 16768.. 16831: 2900352.. 2900415: 64: 0001: net 132: 16896.. 16959: 2900480.. 2900543: 64: 0001: net 133: 17024.. 17087: 2900608.. 2900671: 64: 0001: net 134: 17152.. 17215: 2900736.. 2900799: 64: 0001: net 135: 17280.. 17343: 2900864.. 2900927: 64: 0001: net 136: 17408.. 17471: 2900992.. 2901055: 64: 0001: net 137: 17536.. 17599: 2901120.. 2901183: 64: 0001: net 138: 17664.. 17727: 2901248.. 2901311: 64: 0001: net 139: 17792.. 17855: 2901376.. 2901439: 64: 0001: net 140: 17920.. 17983: 2901504.. 2901567: 64: 0001: net 141: 18048.. 18111: 2901632.. 2901695: 64: 0001: net 142: 18176.. 18239: 2901760.. 2901823: 64: 0001: net 143: 18304.. 18367: 2901888.. 2901951: 64: 0001: net 144: 18432.. 18495: 2902016.. 2902079: 64: 0001: net 145: 18560.. 18623: 2902144.. 2902207: 64: 0001: net 146: 18688.. 18751: 2902272.. 2902335: 64: 0001: net 147: 18816.. 18879: 2902400.. 2902463: 64: 0001: net 148: 18944.. 19007: 2902528.. 2902591: 64: 0001: net 149: 19072.. 19135: 2902656.. 2902719: 64: 0001: net 150: 19200.. 19263: 2902784.. 2902847: 64: 0001: net 151: 19328.. 19391: 2902912.. 
2902975: 64: 0001: net 152: 19456.. 19519: 2903040.. 2903103: 64: 0001: net 153: 19584.. 19647: 2903168.. 2903231: 64: 0001: net 154: 19712.. 19775: 2903296.. 2903359: 64: 0001: net 155: 19840.. 19903: 2903424.. 2903487: 64: 0001: net 156: 19968.. 20031: 2903552.. 2903615: 64: 0001: net 157: 20096.. 20159: 2903680.. 2903743: 64: 0001: net 158: 20224.. 20287: 2903808.. 2903871: 64: 0001: net 159: 20352.. 20415: 2903936.. 2903999: 64: 0001: net 160: 20480.. 20543: 2904064.. 2904127: 64: 0001: net 161: 20608.. 20671: 2904192.. 2904255: 64: 0001: net 162: 20736.. 20799: 2904320.. 2904383: 64: 0001: net 163: 20864.. 20927: 2904448.. 2904511: 64: 0001: net 164: 20992.. 21055: 2904576.. 2904639: 64: 0001: net 165: 21120.. 21183: 2904704.. 2904767: 64: 0001: net 166: 21248.. 21311: 2904832.. 2904895: 64: 0001: net 167: 21376.. 21439: 2904960.. 2905023: 64: 0001: net 168: 21504.. 21567: 2905088.. 2905151: 64: 0001: net 169: 21632.. 21695: 2905216.. 2905279: 64: 0001: net 170: 21760.. 21823: 2905344.. 2905407: 64: 0001: net 171: 21888.. 21951: 2905472.. 2905535: 64: 0001: net 172: 22016.. 22079: 2905600.. 2905663: 64: 0001: net 173: 22144.. 22207: 2905728.. 2905791: 64: 0001: net 174: 22272.. 22335: 2905856.. 2905919: 64: 0001: net 175: 22400.. 22463: 2905984.. 2906047: 64: 0001: net 176: 22528.. 22591: 2906112.. 2906175: 64: 0001: net 177: 22656.. 22719: 2906240.. 2906303: 64: 0001: net 178: 22784.. 22847: 2906368.. 2906431: 64: 0001: net 179: 22912.. 22975: 2906496.. 2906559: 64: 0001: net 180: 23040.. 23103: 2906624.. 2906687: 64: 0001: net 181: 23168.. 23231: 2906752.. 2906815: 64: 0001: net 182: 23296.. 23359: 2906880.. 2906943: 64: 0001: net 183: 23424.. 23487: 2907008.. 2907071: 64: 0001: net 184: 23552.. 23615: 2907136.. 2907199: 64: 0001: net 185: 23680.. 23743: 2907264.. 2907327: 64: 0001: net 186: 23808.. 23871: 2907392.. 2907455: 64: 0001: net 187: 23936.. 23999: 2907520.. 2907583: 64: 0001: net 188: 24064.. 24127: 2907648.. 2907711: 64: 0001: net 189: 24192.. 24255: 2907776.. 2907839: 64: 0001: net 190: 24320.. 24383: 2907904.. 2907967: 64: 0001: net 191: 24448.. 24511: 2908032.. 2908095: 64: 0001: net 192: 24576.. 24639: 2908160.. 2908223: 64: 0001: net 193: 24704.. 24767: 2908288.. 2908351: 64: 0001: net 194: 24832.. 24895: 2908416.. 2908479: 64: 0001: net 195: 24960.. 25023: 2908544.. 2908607: 64: 0001: net 196: 25088.. 25151: 2908672.. 2908735: 64: 0001: net 197: 25216.. 25279: 2908800.. 2908863: 64: 0001: net 198: 25344.. 25407: 2908928.. 2908991: 64: 0001: net 199: 25472.. 25535: 2909056.. 2909119: 64: 0001: net 200: 25600.. 25663: 2909184.. 2909247: 64: 0001: net 201: 25728.. 25791: 2909312.. 2909375: 64: 0001: net 202: 25856.. 25919: 2909440.. 2909503: 64: 0001: net 203: 25984.. 26047: 2909568.. 2909631: 64: 0001: net 204: 26112.. 26175: 2909696.. 2909759: 64: 0001: net 205: 26240.. 26303: 2909824.. 2909887: 64: 0001: net 206: 26368.. 26431: 2909952.. 2910015: 64: 0001: net 207: 26496.. 26559: 2910080.. 2910143: 64: 0001: net 208: 26624.. 26687: 2910208.. 2910271: 64: 0001: net 209: 26752.. 26815: 2910336.. 2910399: 64: 0001: net 210: 26880.. 26943: 2910464.. 2910527: 64: 0001: net 211: 27008.. 27071: 2910592.. 2910655: 64: 0001: net 212: 27136.. 27199: 2910720.. 2910783: 64: 0001: net 213: 27264.. 27327: 2910848.. 2910911: 64: 0001: net 214: 27392.. 27455: 2910976.. 2911039: 64: 0001: net 215: 27520.. 27583: 2911104.. 2911167: 64: 0001: net 216: 27648.. 27711: 2911232.. 2911295: 64: 0001: net 217: 27776.. 27839: 2911360.. 2911423: 64: 0001: net 218: 27904.. 27967: 2911488.. 
2911551: 64: 0001: net 219: 28032.. 28095: 2911616.. 2911679: 64: 0001: net 220: 28160.. 28223: 2911744.. 2911807: 64: 0001: net 221: 28288.. 28351: 2911872.. 2911935: 64: 0001: net 222: 28416.. 28479: 2912000.. 2912063: 64: 0001: net 223: 28544.. 28607: 2912128.. 2912191: 64: 0001: net 224: 28672.. 28735: 2912256.. 2912319: 64: 0001: net 225: 28800.. 28863: 2912384.. 2912447: 64: 0001: net 226: 28928.. 28991: 2912512.. 2912575: 64: 0001: net 227: 29056.. 29119: 2912640.. 2912703: 64: 0001: net 228: 29184.. 29247: 2912768.. 2912831: 64: 0001: net 229: 29312.. 29375: 2912896.. 2912959: 64: 0001: net 230: 29440.. 29503: 2913024.. 2913087: 64: 0001: net 231: 29568.. 29631: 2913152.. 2913215: 64: 0001: net 232: 29696.. 29759: 2913280.. 2913343: 64: 0001: net 233: 29824.. 29887: 2913408.. 2913471: 64: 0001: net 234: 29952.. 30015: 2913536.. 2913599: 64: 0001: net 235: 30080.. 30143: 2913664.. 2913727: 64: 0001: net 236: 30208.. 30271: 2913792.. 2913855: 64: 0001: net 237: 30336.. 30399: 2913920.. 2913983: 64: 0001: net 238: 30464.. 30527: 2914048.. 2914111: 64: 0001: net 239: 30592.. 30655: 2914176.. 2914239: 64: 0001: net 240: 30720.. 30783: 2914304.. 2914367: 64: 0001: net 241: 30848.. 30911: 2914432.. 2914495: 64: 0001: net 242: 30976.. 31039: 2914560.. 2914623: 64: 0001: net 243: 31104.. 31167: 2914688.. 2914751: 64: 0001: net 244: 31232.. 31295: 2914816.. 2914879: 64: 0001: net 245: 31360.. 31423: 2914944.. 2915007: 64: 0001: net 246: 31488.. 31551: 2915072.. 2915135: 64: 0001: net 247: 31616.. 31679: 2915200.. 2915263: 64: 0001: net 248: 31744.. 31807: 2915328.. 2915391: 64: 0001: net 249: 31872.. 31935: 2915456.. 2915519: 64: 0001: net 250: 32000.. 32063: 2915584.. 2915647: 64: 0001: net 251: 32128.. 32191: 2915712.. 2915775: 64: 0001: net 252: 32256.. 32319: 2915840.. 2915903: 64: 0001: net 253: 32384.. 32447: 2915968.. 2916031: 64: 0001: net 254: 32512.. 32575: 2916096.. 2916159: 64: 0001: net 255: 32640.. 32703: 2916224.. 2916287: 64: 0001: net 256: 0.. 63: 594928.. 594991: 64: 0000: net 257: 128.. 191: 594992.. 595055: 64: 0000: net 258: 256.. 319: 595056.. 595119: 64: 0000: net 259: 384.. 447: 595120.. 595183: 64: 0000: net 260: 512.. 575: 595184.. 595247: 64: 0000: net 261: 640.. 703: 595248.. 595311: 64: 0000: net 262: 768.. 831: 595312.. 595375: 64: 0000: net 263: 896.. 959: 595376.. 595439: 64: 0000: net 264: 1024.. 1087: 595440.. 595503: 64: 0000: net 265: 1152.. 1215: 595504.. 595567: 64: 0000: net 266: 1280.. 1343: 595568.. 595631: 64: 0000: net 267: 1408.. 1471: 595632.. 595695: 64: 0000: net 268: 1536.. 1599: 595696.. 595759: 64: 0000: net 269: 1664.. 1727: 595760.. 595823: 64: 0000: net 270: 1792.. 1855: 595824.. 595887: 64: 0000: net 271: 1920.. 1983: 595888.. 595951: 64: 0000: net 272: 2048.. 2111: 613612.. 613675: 64: 0000: net 273: 2176.. 2239: 613676.. 613739: 64: 0000: net 274: 2304.. 2367: 613740.. 613803: 64: 0000: net 275: 2432.. 2495: 613804.. 613867: 64: 0000: net 276: 2560.. 2623: 613868.. 613931: 64: 0000: net 277: 2688.. 2751: 613932.. 613995: 64: 0000: net 278: 2816.. 2879: 613996.. 614059: 64: 0000: net 279: 2944.. 3007: 614060.. 614123: 64: 0000: net 280: 3072.. 3135: 614124.. 614187: 64: 0000: net 281: 3200.. 3263: 614188.. 614251: 64: 0000: net 282: 3328.. 3391: 614252.. 614315: 64: 0000: net 283: 3456.. 3519: 614316.. 614379: 64: 0000: net 284: 3584.. 3647: 595968.. 596031: 64: 0000: net 285: 3712.. 3775: 596032.. 596095: 64: 0000: net 286: 3840.. 3903: 596096.. 596159: 64: 0000: net 287: 3968.. 4031: 596160.. 596223: 64: 0000: net 288: 4096.. 
4159: 659520.. 659583: 64: 0000: net 289: 4224.. 4287: 659648.. 659711: 64: 0000: net 290: 4352.. 4415: 659776.. 659839: 64: 0000: net 291: 4480.. 4543: 659904.. 659967: 64: 0000: net 292: 4608.. 4671: 660032.. 660095: 64: 0000: net 293: 4736.. 4799: 660160.. 660223: 64: 0000: net 294: 4864.. 4927: 660288.. 660351: 64: 0000: net 295: 4992.. 5055: 660416.. 660479: 64: 0000: net 296: 5120.. 5183: 660544.. 660607: 64: 0000: net 297: 5248.. 5311: 660672.. 660735: 64: 0000: net 298: 5376.. 5439: 660800.. 660863: 64: 0000: net 299: 5504.. 5567: 660928.. 660991: 64: 0000: net 300: 5632.. 5695: 661056.. 661119: 64: 0000: net 301: 5760.. 5823: 661184.. 661247: 64: 0000: net 302: 5888.. 5951: 661312.. 661375: 64: 0000: net 303: 6016.. 6079: 661440.. 661503: 64: 0000: net 304: 6144.. 6207: 661568.. 661631: 64: 0000: net 305: 6272.. 6335: 661696.. 661759: 64: 0000: net 306: 6400.. 6463: 661824.. 661887: 64: 0000: net 307: 6528.. 6591: 661952.. 662015: 64: 0000: net 308: 6656.. 6719: 662080.. 662143: 64: 0000: net 309: 6784.. 6847: 662208.. 662271: 64: 0000: net 310: 6912.. 6975: 662336.. 662399: 64: 0000: net 311: 7040.. 7103: 662464.. 662527: 64: 0000: net 312: 7168.. 7231: 662592.. 662655: 64: 0000: net 313: 7296.. 7359: 662720.. 662783: 64: 0000: net 314: 7424.. 7487: 662848.. 662911: 64: 0000: net 315: 7552.. 7615: 662976.. 663039: 64: 0000: net 316: 7680.. 7743: 663104.. 663167: 64: 0000: net 317: 7808.. 7871: 663232.. 663295: 64: 0000: net 318: 7936.. 7999: 663360.. 663423: 64: 0000: net 319: 8064.. 8127: 663488.. 663551: 64: 0000: net 320: 8192.. 8255: 671744.. 671807: 64: 0000: net 321: 8320.. 8383: 671872.. 671935: 64: 0000: net 322: 8448.. 8511: 672000.. 672063: 64: 0000: net 323: 8576.. 8639: 672128.. 672191: 64: 0000: net 324: 8704.. 8767: 672256.. 672319: 64: 0000: net 325: 8832.. 8895: 672384.. 672447: 64: 0000: net 326: 8960.. 9023: 672512.. 672575: 64: 0000: net 327: 9088.. 9151: 672640.. 672703: 64: 0000: net 328: 9216.. 9279: 672768.. 672831: 64: 0000: net 329: 9344.. 9407: 672896.. 672959: 64: 0000: net 330: 9472.. 9535: 673024.. 673087: 64: 0000: net 331: 9600.. 9663: 673152.. 673215: 64: 0000: net 332: 9728.. 9791: 673280.. 673343: 64: 0000: net 333: 9856.. 9919: 673408.. 673471: 64: 0000: net 334: 9984.. 10047: 673536.. 673599: 64: 0000: net 335: 10112.. 10175: 673664.. 673727: 64: 0000: net 336: 10240.. 10303: 673792.. 673855: 64: 0000: net 337: 10368.. 10431: 673920.. 673983: 64: 0000: net 338: 10496.. 10559: 674048.. 674111: 64: 0000: net 339: 10624.. 10687: 674176.. 674239: 64: 0000: net 340: 10752.. 10815: 674304.. 674367: 64: 0000: net 341: 10880.. 10943: 674432.. 674495: 64: 0000: net 342: 11008.. 11071: 674560.. 674623: 64: 0000: net 343: 11136.. 11199: 674688.. 674751: 64: 0000: net 344: 11264.. 11327: 674816.. 674879: 64: 0000: net 345: 11392.. 11455: 674944.. 675007: 64: 0000: net 346: 11520.. 11583: 675072.. 675135: 64: 0000: net 347: 11648.. 11711: 675200.. 675263: 64: 0000: net 348: 11776.. 11839: 675328.. 675391: 64: 0000: net 349: 11904.. 11967: 675456.. 675519: 64: 0000: net 350: 12032.. 12095: 675584.. 675647: 64: 0000: net 351: 12160.. 12223: 675712.. 675775: 64: 0000: net 352: 12288.. 12351: 675840.. 675903: 64: 0000: net 353: 12416.. 12479: 675968.. 676031: 64: 0000: net 354: 12544.. 12607: 676096.. 676159: 64: 0000: net 355: 12672.. 12735: 676224.. 676287: 64: 0000: net 356: 12800.. 12863: 676352.. 676415: 64: 0000: net 357: 12928.. 12991: 676480.. 676543: 64: 0000: net 358: 13056.. 13119: 676608.. 676671: 64: 0000: net 359: 13184.. 13247: 676736.. 
676799: 64: 0000: net 360: 13312.. 13375: 676864.. 676927: 64: 0000: net 361: 13440.. 13503: 676992.. 677055: 64: 0000: net 362: 13568.. 13631: 677120.. 677183: 64: 0000: net 363: 13696.. 13759: 677248.. 677311: 64: 0000: net 364: 13824.. 13887: 677376.. 677439: 64: 0000: net 365: 13952.. 14015: 677504.. 677567: 64: 0000: net 366: 14080.. 14143: 677632.. 677695: 64: 0000: net 367: 14208.. 14271: 677760.. 677823: 64: 0000: net 368: 14336.. 14399: 677888.. 677951: 64: 0000: net 369: 14464.. 14527: 678016.. 678079: 64: 0000: net 370: 14592.. 14655: 678144.. 678207: 64: 0000: net 371: 14720.. 14783: 678272.. 678335: 64: 0000: net 372: 14848.. 14911: 678400.. 678463: 64: 0000: net 373: 14976.. 15039: 678528.. 678591: 64: 0000: net 374: 15104.. 15167: 678656.. 678719: 64: 0000: net 375: 15232.. 15295: 678784.. 678847: 64: 0000: net 376: 15360.. 15423: 678912.. 678975: 64: 0000: net 377: 15488.. 15551: 679040.. 679103: 64: 0000: net 378: 15616.. 15679: 679168.. 679231: 64: 0000: net 379: 15744.. 15807: 679296.. 679359: 64: 0000: net 380: 15872.. 15935: 679424.. 679487: 64: 0000: net 381: 16000.. 16063: 679552.. 679615: 64: 0000: net 382: 16128.. 16191: 679680.. 679743: 64: 0000: net 383: 16256.. 16319: 679808.. 679871: 64: 0000: net 384: 16384.. 16447: 679936.. 679999: 64: 0000: net 385: 16512.. 16575: 680064.. 680127: 64: 0000: net 386: 16640.. 16703: 680192.. 680255: 64: 0000: net 387: 16768.. 16831: 680320.. 680383: 64: 0000: net 388: 16896.. 16959: 680448.. 680511: 64: 0000: net 389: 17024.. 17087: 680576.. 680639: 64: 0000: net 390: 17152.. 17215: 680704.. 680767: 64: 0000: net 391: 17280.. 17343: 680832.. 680895: 64: 0000: net 392: 17408.. 17471: 680960.. 681023: 64: 0000: net 393: 17536.. 17599: 681088.. 681151: 64: 0000: net 394: 17664.. 17727: 681216.. 681279: 64: 0000: net 395: 17792.. 17855: 681344.. 681407: 64: 0000: net 396: 17920.. 17983: 681472.. 681535: 64: 0000: net 397: 18048.. 18111: 681600.. 681663: 64: 0000: net 398: 18176.. 18239: 681728.. 681791: 64: 0000: net 399: 18304.. 18367: 681856.. 681919: 64: 0000: net 400: 18432.. 18495: 681984.. 682047: 64: 0000: net 401: 18560.. 18623: 682112.. 682175: 64: 0000: net 402: 18688.. 18751: 682240.. 682303: 64: 0000: net 403: 18816.. 18879: 682368.. 682431: 64: 0000: net 404: 18944.. 19007: 682496.. 682559: 64: 0000: net 405: 19072.. 19135: 682624.. 682687: 64: 0000: net 406: 19200.. 19263: 682752.. 682815: 64: 0000: net 407: 19328.. 19391: 682880.. 682943: 64: 0000: net 408: 19456.. 19519: 683008.. 683071: 64: 0000: net 409: 19584.. 19647: 683136.. 683199: 64: 0000: net 410: 19712.. 19775: 683264.. 683327: 64: 0000: net 411: 19840.. 19903: 683392.. 683455: 64: 0000: net 412: 19968.. 20031: 683520.. 683583: 64: 0000: net 413: 20096.. 20159: 683648.. 683711: 64: 0000: net 414: 20224.. 20287: 683776.. 683839: 64: 0000: net 415: 20352.. 20415: 683904.. 683967: 64: 0000: net 416: 20480.. 20543: 684032.. 684095: 64: 0000: net 417: 20608.. 20671: 684160.. 684223: 64: 0000: net 418: 20736.. 20799: 684288.. 684351: 64: 0000: net 419: 20864.. 20927: 684416.. 684479: 64: 0000: net 420: 20992.. 21055: 684544.. 684607: 64: 0000: net 421: 21120.. 21183: 684672.. 684735: 64: 0000: net 422: 21248.. 21311: 684800.. 684863: 64: 0000: net 423: 21376.. 21439: 684928.. 684991: 64: 0000: net 424: 21504.. 21567: 685056.. 685119: 64: 0000: net 425: 21632.. 21695: 685184.. 685247: 64: 0000: net 426: 21760.. 21823: 685312.. 685375: 64: 0000: net 427: 21888.. 21951: 685440.. 685503: 64: 0000: net 428: 22016.. 22079: 685568.. 685631: 64: 0000: net 429: 22144.. 
22207: 685696.. 685759: 64: 0000: net 430: 22272.. 22335: 685824.. 685887: 64: 0000: net 431: 22400.. 22463: 685952.. 686015: 64: 0000: net 432: 22528.. 22591: 686080.. 686143: 64: 0000: net 433: 22656.. 22719: 686208.. 686271: 64: 0000: net 434: 22784.. 22847: 686336.. 686399: 64: 0000: net 435: 22912.. 22975: 686464.. 686527: 64: 0000: net 436: 23040.. 23103: 686592.. 686655: 64: 0000: net 437: 23168.. 23231: 686720.. 686783: 64: 0000: net 438: 23296.. 23359: 686848.. 686911: 64: 0000: net 439: 23424.. 23487: 686976.. 687039: 64: 0000: net 440: 23552.. 23615: 687104.. 687167: 64: 0000: net 441: 23680.. 23743: 687232.. 687295: 64: 0000: net 442: 23808.. 23871: 687360.. 687423: 64: 0000: net 443: 23936.. 23999: 687488.. 687551: 64: 0000: net 444: 24064.. 24127: 687616.. 687679: 64: 0000: net 445: 24192.. 24255: 687744.. 687807: 64: 0000: net 446: 24320.. 24383: 687872.. 687935: 64: 0000: net 447: 24448.. 24511: 688000.. 688063: 64: 0000: net 448: 24576.. 24639: 688128.. 688191: 64: 0000: net 449: 24704.. 24767: 688256.. 688319: 64: 0000: net 450: 24832.. 24895: 688384.. 688447: 64: 0000: net 451: 24960.. 25023: 688512.. 688575: 64: 0000: net 452: 25088.. 25151: 688640.. 688703: 64: 0000: net 453: 25216.. 25279: 688768.. 688831: 64: 0000: net 454: 25344.. 25407: 688896.. 688959: 64: 0000: net 455: 25472.. 25535: 689024.. 689087: 64: 0000: net 456: 25600.. 25663: 689152.. 689215: 64: 0000: net 457: 25728.. 25791: 689280.. 689343: 64: 0000: net 458: 25856.. 25919: 689408.. 689471: 64: 0000: net 459: 25984.. 26047: 689536.. 689599: 64: 0000: net 460: 26112.. 26175: 689664.. 689727: 64: 0000: net 461: 26240.. 26303: 689792.. 689855: 64: 0000: net 462: 26368.. 26431: 689920.. 689983: 64: 0000: net 463: 26496.. 26559: 690048.. 690111: 64: 0000: net 464: 26624.. 26687: 690176.. 690239: 64: 0000: net 465: 26752.. 26815: 690304.. 690367: 64: 0000: net 466: 26880.. 26943: 690432.. 690495: 64: 0000: net 467: 27008.. 27071: 690560.. 690623: 64: 0000: net 468: 27136.. 27199: 690688.. 690751: 64: 0000: net 469: 27264.. 27327: 690816.. 690879: 64: 0000: net 470: 27392.. 27455: 690944.. 691007: 64: 0000: net 471: 27520.. 27583: 691072.. 691135: 64: 0000: net 472: 27648.. 27711: 691200.. 691263: 64: 0000: net 473: 27776.. 27839: 691328.. 691391: 64: 0000: net 474: 27904.. 27967: 691456.. 691519: 64: 0000: net 475: 28032.. 28095: 691584.. 691647: 64: 0000: net 476: 28160.. 28223: 691712.. 691775: 64: 0000: net 477: 28288.. 28351: 691840.. 691903: 64: 0000: net 478: 28416.. 28479: 691968.. 692031: 64: 0000: net 479: 28544.. 28607: 692096.. 692159: 64: 0000: net 480: 28672.. 28735: 692224.. 692287: 64: 0000: net 481: 28800.. 28863: 692352.. 692415: 64: 0000: net 482: 28928.. 28991: 692480.. 692543: 64: 0000: net 483: 29056.. 29119: 692608.. 692671: 64: 0000: net 484: 29184.. 29247: 692736.. 692799: 64: 0000: net 485: 29312.. 29375: 692864.. 692927: 64: 0000: net 486: 29440.. 29503: 692992.. 693055: 64: 0000: net 487: 29568.. 29631: 693120.. 693183: 64: 0000: net 488: 29696.. 29759: 693248.. 693311: 64: 0000: net 489: 29824.. 29887: 693376.. 693439: 64: 0000: net 490: 29952.. 30015: 693504.. 693567: 64: 0000: net 491: 30080.. 30143: 693632.. 693695: 64: 0000: net 492: 30208.. 30271: 693760.. 693823: 64: 0000: net 493: 30336.. 30399: 693888.. 693951: 64: 0000: net 494: 30464.. 30527: 694016.. 694079: 64: 0000: net 495: 30592.. 30655: 694144.. 694207: 64: 0000: net 496: 30720.. 30783: 694272.. 694335: 64: 0000: net 497: 30848.. 30911: 694400.. 694463: 64: 0000: net 498: 30976.. 31039: 694528.. 
694591: 64: 0000: net 499: 31104.. 31167: 694656.. 694719: 64: 0000: net 500: 31232.. 31295: 694784.. 694847: 64: 0000: net 501: 31360.. 31423: 694912.. 694975: 64: 0000: net 502: 31488.. 31551: 695040.. 695103: 64: 0000: net 503: 31616.. 31679: 695168.. 695231: 64: 0000: net 504: 31744.. 31807: 695296.. 695359: 64: 0000: net 505: 31872.. 31935: 695424.. 695487: 64: 0000: net 506: 32000.. 32063: 695552.. 695615: 64: 0000: net 507: 32128.. 32191: 695680.. 695743: 64: 0000: net 508: 32256.. 32319: 695808.. 695871: 64: 0000: net 509: 32384.. 32447: 695936.. 695999: 64: 0000: net 510: 32512.. 32575: 696064.. 696127: 64: 0000: net 511: 32640.. 32703: 696192.. 696255: 64: 0000: last,net /mnt/lustre/f130e.sanity: 10 extents found FIEMAP with continuation calls succeeded PASS 130e (13s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 130f: FIEMAP (unstriped file) ============= 22:05:34 (1713405934) Filesystem type is: bd00bd0 File size of /mnt/lustre/f130f.sanity is 33554432 (32768 blocks of 1024 bytes) /mnt/lustre/f130f.sanity: 0 extents found PASS 130f (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 130g: FIEMAP (overstripe file) ============ 22:05:39 (1713405939) 200+0 records in 200+0 records out 209715200 bytes (210 MB) copied, 5.02441 s, 41.7 MB/s filefrag list 200 extents in file with stripecount 200 PASS 130g (11s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 131a: test iov's crossing stripe boundary for writev/readv ========================================================== 22:05:52 (1713405952) PASS 131a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 131b: test append writev ================== 22:05:57 (1713405957) /mnt/lustre/f131b.sanity has type file OK /mnt/lustre/f131b.sanity has size 3145728 OK /mnt/lustre/f131b.sanity has type file OK /mnt/lustre/f131b.sanity has size 5767168 OK PASS 131b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 131c: test read/write on file w/o objects ========================================================== 22:06:02 (1713405962) Write error: Bad file descriptor (rc = -1, len = 1048576) PASS 131c (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 131d: test short read ===================== 22:06:07 (1713405967) PASS 131d (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 131e: test read hitting hole ============== 22:06:12 (1713405972) PASS 131e (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 133a: Verifying MDT stats ================================================================================================== 22:06:16 (1713405976) mdt.lustre-MDT0000.rename_stats mdt.lustre-MDT0001.rename_stats mdt.lustre-MDT0000.md_stats=clear mdt.lustre-MDT0001.md_stats=clear obdfilter.lustre-OST0000.stats=clear obdfilter.lustre-OST0001.stats=clear mdt.lustre-MDT0000.md_stats=clear mdt.lustre-MDT0001.md_stats=clear /mnt/lustre/d133a.sanity/stats_testdir: total 0 -rw-r--r-- 1 root root 0 Apr 17 22:06 f133a.sanity PASS 133a (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 133b: Verifying extra MDT stats 
============================================================================================ 22:06:26 (1713405986) mdt.lustre-MDT0000.md_stats=clear mdt.lustre-MDT0001.md_stats=clear obdfilter.lustre-OST0000.stats=clear obdfilter.lustre-OST0001.stats=clear mdt.lustre-MDT0000.md_stats=clear mdt.lustre-MDT0001.md_stats=clear PASS 133b (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 133c: Verifying OST stats ================================================================================================== 22:06:33 (1713405993) striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d133c.sanity/stats_testdir Waiting for MDT destroys to complete mdt.lustre-MDT0000.md_stats=clear mdt.lustre-MDT0001.md_stats=clear obdfilter.lustre-OST0000.stats=clear obdfilter.lustre-OST0001.stats=clear 1+0 records in 1+0 records out 524288 bytes (524 kB) copied, 0.0329569 s, 15.9 MB/s 1+0 records in 1+0 records out 1024 bytes (1.0 kB) copied, 0.0107834 s, 95.0 kB/s Waiting for MDT destroys to complete PASS 133c (23s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 133d: Verifying rename_stats ================================================================================================== 22:06:58 (1713406018) mdt.lustre-MDT0000.rename_stats mdt.lustre-MDT0001.rename_stats mdt.lustre-MDT0000.rename_stats=clear mdt.lustre-MDT0001.rename_stats=clear total: 512 open/close in 1.97 seconds: 259.89 ops/second source rename dir size: 32K target rename dir size: 4K mdt.lustre-MDT0000.rename_stats= rename_stats: - snapshot_time: 1713406022.578239305 - start_time: 1713406019.116722700 - elapsed_time: 3.461516605 - same_dir: 32KB: { sample: 1, pct: 100, cum_pct: 100 } Check same dir rename stats success mdt.lustre-MDT0000.rename_stats=clear mdt.lustre-MDT0001.rename_stats=clear source rename dir size: 32K target rename dir size: 4K mdt.lustre-MDT0000.rename_stats= rename_stats: - snapshot_time: 1713406024.011220658 - start_time: 1713406023.587522749 - elapsed_time: 0.423697909 - crossdir_src: 32KB: { sample: 1, pct: 100, cum_pct: 100 } - crossdir_tgt: 4KB: { sample: 1, pct: 100, cum_pct: 100 } Check cross dir rename stats success PASS 133d (11s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 133e: Verifying OST {read,write}_bytes nid stats =========================================================================== 22:07:12 (1713406032) 42+0 records in 42+0 records out 1376256 bytes (1.4 MB) copied, 0.0888975 s, 15.5 MB/s 42+0 records in 42+0 records out 1376256 bytes (1.4 MB) copied, 0.0750124 s, 18.3 MB/s PASS 133e (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 133f: Check reads/writes of client lustre proc files with bad area io ========================================================== 22:07:18 (1713406038) cln..Stopping clients: oleg146-client.virtnet /mnt/lustre (opts:) Stopping client oleg146-client.virtnet /mnt/lustre opts: Stopping clients: oleg146-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg146-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg146-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg146-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg146-server unloading modules on: 'oleg146-server' oleg146-server: oleg146-server.virtnet: executing unload_modules_local modules unloaded. 
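For reference while the framework tears down and remounts for 133f: the test's subject, per its title, is reading and writing client-side Lustre parameter files with a deliberately bad buffer area/offset, where an error return or short read is acceptable and an oops or hang is the failure. A minimal sketch of the read half of such a probe, assuming the usual parameter locations under /sys/fs/lustre, /sys/kernel/debug/lustre and /proc/fs/lustre; this is an illustration, not the framework's own code:

    # Read each parameter file far past its valid data; tolerate EIO/EINVAL,
    # the only real failure is a crash or a hung read.
    find /sys/fs/lustre /sys/kernel/debug/lustre /proc/fs/lustre -type f 2>/dev/null |
    while read -r f; do
            dd if="$f" of=/dev/null bs=1 skip=1000 count=64 2>/dev/null || true
    done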
mnt..Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg146-server' oleg146-server: oleg146-server.virtnet: executing load_modules_local oleg146-server: Loading modules from /home/green/git/lustre-release/lustre oleg146-server: detected 4 online CPUs by sysfs oleg146-server: Force libcfs to create 2 CPU partitions oleg146-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg146-server: quota/lquota options: 'hash_lqs_cur_bits=3' Checking servers environments Checking clients oleg146-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg146-server' oleg146-server: oleg146-server.virtnet: executing load_modules_local oleg146-server: Loading modules from /home/green/git/lustre-release/lustre oleg146-server: detected 4 online CPUs by sysfs oleg146-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-MDT0000 pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-MDT0001 pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-OST0000 pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre Starting client oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre Started clients oleg146-client.virtnet: 192.168.201.146@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff88012e853000.idle_timeout=debug osc.lustre-OST0001-osc-ffff88012e853000.idle_timeout=debug disable quota as required done PASS 133f (101s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 133g: Check reads/writes of server lustre proc files with bad area io ========================================================== 22:09:01 (1713406141) cln..Stopping clients: oleg146-client.virtnet /mnt/lustre (opts:) Stopping client oleg146-client.virtnet /mnt/lustre opts: Stopping clients: oleg146-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on 
oleg146-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg146-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg146-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg146-server unloading modules on: 'oleg146-server' oleg146-server: oleg146-server.virtnet: executing unload_modules_local modules unloaded. mnt..Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg146-server' oleg146-server: oleg146-server.virtnet: executing load_modules_local oleg146-server: Loading modules from /home/green/git/lustre-release/lustre oleg146-server: detected 4 online CPUs by sysfs oleg146-server: Force libcfs to create 2 CPU partitions oleg146-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg146-server: quota/lquota options: 'hash_lqs_cur_bits=3' Checking servers environments Checking clients oleg146-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory loading modules on: 'oleg146-server' oleg146-server: oleg146-server.virtnet: executing load_modules_local oleg146-server: Loading modules from /home/green/git/lustre-release/lustre oleg146-server: detected 4 online CPUs by sysfs oleg146-server: Force libcfs to create 2 CPU partitions oleg146-server: libkmod: kmod_module_get_holders: could not open '/sys/module/pcc_cpufreq/holders': No such file or directory Setup mgs, mdt, osts pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-MDT0000 pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-MDT0001 pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-OST0000 pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre Starting client oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre Started clients oleg146-client.virtnet: 192.168.201.146@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff88008666e000.idle_timeout=debug 
osc.lustre-OST0001-osc-ffff88008666e000.idle_timeout=debug disable quota as required done PASS 133g (134s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 133h: Proc files should end with newlines ========================================================== 22:11:17 (1713406277) PASS 133h (310s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 134a: Server reclaims locks when reaching lock_reclaim_threshold ========================================================== 22:16:30 (1713406590) total: 1000 open/close in 4.26 seconds: 234.70 ops/second fail_loc=0x327 fail_val=500 sleep 10 seconds ... fail_loc=0 fail_val=0 - unlinked 0 (time 1713406608 ; total 0 ; last 0) total: 1000 unlinks in 5 seconds: 200.000000 unlinks/second PASS 134a (25s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 134b: Server rejects lock request when reaching lock_limit_mb ========================================================== 22:16:57 (1713406617) ldlm.lock_reclaim_threshold_mb=0 fail_loc=0x328 fail_val=500 debug=+trace Sleep 20 seconds ... fail_loc=0 fail_val=0 oleg146-server: error: set_param: setting /sys/kernel/debug/lustre/ldlm/lock_reclaim_threshold_mb=746m: Invalid argument oleg146-server: error: set_param: setting 'ldlm/lock_reclaim_threshold_mb'='746m': Invalid argument pdsh@oleg146-client: oleg146-server: ssh exited with exit code 22 - open/close 441 (time 1713406631.33 total 10.92 last 40.38) total: 600 open/close in 20.62 seconds: 29.09 ops/second - unlinked 0 (time 1713406642 ; total 0 ; last 0) total: 600 unlinks in 1 seconds: 600.000000 unlinks/second PASS 134b (28s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity test_135 skipping SLOW test 135 SKIP: sanity test_136 skipping SLOW test 136 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 140: Check reasonable stack depth (shouldn't LBUG) ============================================================== 22:17:29 (1713406649) striped dir -i0 -c2 -H crush2 /mnt/lustre/d140.sanity striped dir -i0 -c2 -H all_char 1 striped dir -i0 -c2 -H crush 2 striped dir -i0 -c2 -H crush 3 striped dir -i0 -c2 -H all_char 4 striped dir -i0 -c2 -H crush 5 striped dir -i0 -c2 -H crush2 6 striped dir -i0 -c2 -H crush2 7 striped dir -i0 -c2 -H fnv_1a_64 8 striped dir -i0 -c2 -H crush2 9 striped dir -i0 -c2 -H crush 10 striped dir -i0 -c2 -H all_char 11 striped dir -i0 -c2 -H crush 12 striped dir -i0 -c2 -H crush2 13 striped dir -i0 -c2 -H crush 14 striped dir -i0 -c2 -H crush 15 striped dir -i0 -c2 -H crush 16 striped dir -i0 -c2 -H fnv_1a_64 17 striped dir -i0 -c2 -H fnv_1a_64 18 striped dir -i0 -c2 -H fnv_1a_64 19 striped dir -i0 -c2 -H crush 20 striped dir -i0 -c2 -H crush 21 striped dir -i0 -c2 -H all_char 22 striped dir -i0 -c2 -H crush 23 striped dir -i0 -c2 -H all_char 24 striped dir -i0 -c2 -H all_char 25 striped dir -i0 -c2 -H fnv_1a_64 26 striped dir -i0 -c2 -H crush 27 striped dir -i0 -c2 -H fnv_1a_64 28 striped dir -i0 -c2 -H crush 29 striped dir -i0 -c2 -H crush 30 striped dir -i0 -c2 -H crush2 31 striped dir -i0 -c2 -H crush2 32 striped dir -i0 -c2 -H crush 33 striped dir -i0 -c2 -H fnv_1a_64 34 striped dir -i0 -c2 -H fnv_1a_64 35 striped dir -i0 -c2 -H fnv_1a_64 36 striped dir -i0 -c2 -H all_char 37 striped dir -i0 -c2 -H crush 38 striped dir -i0 -c2 -H crush 39 striped dir -i0 -c2 -H crush 40 striped dir -i0 -c2 -H crush 41 The symlink depth = 40 open symlink_self returns 40 PASS 140 (16s) 
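Test 140 above is easier to follow than its output suggests: it nests roughly forty striped directories, drops a self-referential symlink at the bottom, and opens it to confirm the client walks deep paths without blowing the kernel stack (the "shouldn't LBUG" in the test name; the log reports "symlink depth = 40"). A rough shell equivalent with illustrative paths, noting that the real test creates a striped directory at each level via $LFS setdirstripe:

    cd /mnt/lustre && mkdir d140.demo && cd d140.demo
    for i in $(seq 1 40); do
            mkdir "$i" && cd "$i"    # sanity.sh makes each level a striped dir
    done
    ln -s . symlink_self
    # Resolving this may legitimately fail with ELOOP; a client crash is the bug.
    stat symlink_self/symlink_self >/dev/null 2>&1 || true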
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 150a: truncate/append tests =============== 22:17:47 (1713406667) 1+0 records in 1+0 records out 6096 bytes (6.1 kB) copied, 0.00045344 s, 13.4 MB/s Stopping client oleg146-client.virtnet /mnt/lustre (opts:) Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre Filesystem 1024-blocks Used Available Capacity Mounted on 192.168.201.146@tcp:/lustre 7666232 12468 7201540 1% /mnt/lustre Waiting for MDT destroys to complete PASS 150a (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 150b: Verify fallocate (prealloc) functionality ========================================================== 22:18:00 (1713406680) keep default fallocate mode: 0 Waiting for MDT destroys to complete PASS 150b (18s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 150bb: Verify fallocate modes both zero space ========================================================== 22:18:20 (1713406700) keep default fallocate mode: 0 20+0 records in 20+0 records out 20971520 bytes (21 MB) copied, 0.679336 s, 30.9 MB/s osd-ldiskfs.lustre-MDT0000.fallocate_zero_blocks=1 osd-ldiskfs.lustre-MDT0001.fallocate_zero_blocks=1 osd-ldiskfs.lustre-OST0000.fallocate_zero_blocks=1 osd-ldiskfs.lustre-OST0001.fallocate_zero_blocks=1 osd-ldiskfs.lustre-MDT0000.fallocate_zero_blocks=0 osd-ldiskfs.lustre-MDT0001.fallocate_zero_blocks=0 osd-ldiskfs.lustre-OST0000.fallocate_zero_blocks=0 osd-ldiskfs.lustre-OST0001.fallocate_zero_blocks=0 Waiting for MDT destroys to complete PASS 150bb (18s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 150c: Verify fallocate Size and Blocks ==== 22:18:40 (1713406720) keep default fallocate mode: 0 verify fallocate on PFL file Waiting for MDT destroys to complete PASS 150c (13s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 150d: Verify fallocate Size and Blocks - Non zero start ========================================================== 22:18:55 (1713406735) keep default fallocate mode: 0 Waiting for MDT destroys to complete PASS 150d (13s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 150e: Verify 60% of available OST space consumed by fallocate ========================================================== 22:19:11 (1713406751) keep default fallocate mode: 0 df before: UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 1414116 8568 1279120 1% /mnt/lustre[MDT:0] lustre-MDT0001_UUID 1414116 6500 1281188 1% /mnt/lustre[MDT:1] lustre-OST0000_UUID 3833116 8788 3598232 1% /mnt/lustre[OST:0] lustre-OST0001_UUID 3833116 3672 3603348 1% /mnt/lustre[OST:1] filesystem_summary: 7666232 12460 7201580 1% /mnt/lustre 'fallocate -l 204800k /mnt/lustre/f150e.sanity' succeeded df after fallocate: UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 1414116 8568 1279120 1% /mnt/lustre[MDT:0] lustre-MDT0001_UUID 1414116 6500 1281188 1% /mnt/lustre[MDT:1] lustre-OST0000_UUID 3833116 111188 3495832 4% /mnt/lustre[OST:0] lustre-OST0001_UUID 3833116 106072 3500948 3% /mnt/lustre[OST:1] filesystem_summary: 7666232 217260 6996780 4% /mnt/lustre Waiting for MDT destroys to complete df after unlink: UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 1414116 8568 1279120 1% /mnt/lustre[MDT:0] 
lustre-MDT0001_UUID 1414116 6500 1281188 1% /mnt/lustre[MDT:1] lustre-OST0000_UUID 3833116 8788 3598232 1% /mnt/lustre[OST:0] lustre-OST0001_UUID 3833116 3672 3603348 1% /mnt/lustre[OST:1] filesystem_summary: 7666232 12460 7201580 1% /mnt/lustre Waiting for MDT destroys to complete PASS 150e (14s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 150f: Verify fallocate punch functionality ========================================================== 22:19:27 (1713406767) keep default fallocate mode: 0 Verify fallocate punch: Range within the file range 5+0 records in 5+0 records out 20480 bytes (20 kB) copied, 0.0115015 s, 1.8 MB/s Verify fallocate punch: Range overlapping and less than blocksize 5+0 records in 5+0 records out 20480 bytes (20 kB) copied, 0.00543765 s, 3.8 MB/s Waiting for MDT destroys to complete PASS 150f (11s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 150g: Verify fallocate punch on large range ========================================================== 22:19:40 (1713406780) keep default fallocate mode: 0 Verify fallocate punch: Very large Range 256+0 records in 256+0 records out 1048576 bytes (1.0 MB) copied, 0.156737 s, 6.7 MB/s 256+0 records in 256+0 records out 1048576 bytes (1.0 MB) copied, 0.154811 s, 6.8 MB/s 1024+0 records in 1024+0 records out 4194304 bytes (4.2 MB) copied, 0.52783 s, 7.9 MB/s punch_size = 109043712 size - punch_size: 8192 size - punch_size in blocks: 2 fallocate -p --offset 4096 -l 109043712 /mnt/lustre/f150g.sanity Hole at [4096, 109047808) Waiting for MDT destroys to complete PASS 150g (13s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 150h: Verify extend fallocate updates the file size ========================================================== 22:19:55 (1713406795) keep default fallocate mode: 0 SKIP: sanity test_150h Test must be statx() syscall supported SKIP 150h (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 151: test cache on oss and controls ========================================================================================= 22:19:59 (1713406799) striped dir -i1 -c2 -H crush2 /mnt/lustre/d151.sanity-check 4+0 records in 4+0 records out 16384 bytes (16 kB) copied, 0.0108052 s, 1.5 MB/s BEFORE:10252 AFTER:10256 4+0 records in 4+0 records out 16384 bytes (16 kB) copied, 0.010729 s, 1.5 MB/s BEFORE:4 AFTER:8 fail_loc=0x609 3+0 records in 3+0 records out 12288 bytes (12 kB) copied, 0.00942425 s, 1.3 MB/s fail_loc=0 PASS 151 (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 152: test read/write with enomem ====================================================================================== 22:20:12 (1713406812) fail_loc=0x80000226 1+0 records in 1+0 records out 6096 bytes (6.1 kB) copied, 0.000462571 s, 13.2 MB/s fail_loc=0 fail_loc=0x80000226 fail_loc=0 PASS 152 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 153: test if fdatasync does not crash ================================================================================= 22:20:16 (1713406816) PASS 153 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 154A: lfs path2fid and fid2path basic checks ========================================================== 22:20:21 (1713406821) /mnt/lustre 
[0x2000013a2:0x15:0x0] /mnt/lustre/// [0x2000013a2:0x15:0x0] /mnt/lustre/f154A.sanity [0x2000013a2:0x15:0x0] lfs fid2path: cannot resolve mount point for '/mnt/lustre_wrong': No such device PASS 154A (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 154B: verify the ll_decode_linkea tool ==== 22:20:26 (1713406826) PFID: [0x2000013a2:0x16:0x0], name: f154B.sanity PASS 154B (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 154a: Open-by-FID ========================= 22:20:30 (1713406830) stat fid [0x2000013a2:0x18:0x0] File: '/mnt/lustre/.lustre/fid/[0x2000013a2:0x18:0x0]' Size: 159 Blocks: 1 IO Block: 4194304 regular file Device: 2c54f966h/743766374d Inode: 144115272398143512 Links: 1 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-17 22:20:31.000000000 -0400 Modify: 2024-04-17 22:20:31.000000000 -0400 Change: 2024-04-17 22:20:31.000000000 -0400 Birth: - touch fid [0x2000013a2:0x18:0x0] write to fid [0x2000013a2:0x18:0x0] read fid [0x2000013a2:0x18:0x0] append write to fid [0x2000013a2:0x18:0x0] rename fid [0x2000013a2:0x18:0x0] mv: cannot move '/mnt/lustre/.lustre/fid/[0x2000013a2:0x18:0x0]' to '/mnt/lustre/f154a.sanity.1': Operation not permitted mv: cannot move '/mnt/lustre/f154a.sanity.1' to '/mnt/lustre/.lustre/fid/[0x2000013a2:0x18:0x0]': Operation not permitted truncate fid [0x2000013a2:0x18:0x0] link fid [0x2000013a2:0x18:0x0] uid=500(sanityusr) gid=500(sanityusr) groups=500(sanityusr) setfacl fid [0x2000013a2:0x18:0x0] getfacl fid [0x2000013a2:0x18:0x0] getfacl: Removing leading '/' from absolute path names # file: mnt/lustre/.lustre/fid/[0x2000013a2:0x18:0x0] # owner: root # group: root user::rw- user:sanityusr:rwx group::r-- mask::rwx other::r-- unlink fid [0x2000013a2:0x18:0x0] unlink: cannot unlink '/mnt/lustre/.lustre/fid/[0x2000013a2:0x18:0x0]': Operation not permitted mknod fid [0x2000013a2:0x18:0x0] mknod: '/mnt/lustre/.lustre/fid/[0x2000013a2:0x18:0x0]': Operation not permitted stat non-exist fid [0xf00000400:0x1:0x0] stat: cannot stat '/mnt/lustre/.lustre/fid/[0xf00000400:0x1:0x0]': No such file or directory write to non-exist fid [0xf00000400:0x1:0x0] /home/green/git/lustre-release/lustre/tests/sanity.sh: line 16999: /mnt/lustre/.lustre/fid/[0xf00000400:0x1:0x0]: Operation not permitted link new fid [0xf00000400:0x1:0x0] ln: failed to create hard link '/mnt/lustre/.lustre/fid/[0xf00000400:0x1:0x0]' => '/mnt/lustre/f154a.sanity': Operation not permitted ls [0x2400013a2:0x4:0x0] f154a.sanity touch [0x2400013a2:0x4:0x0]/f154a.sanity.1 touch /mnt/lustre/.lustre/fid/f154a.sanity touch: setting times of '/mnt/lustre/.lustre/fid/f154a.sanity': No such file or directory setxattr to /mnt/lustre/.lustre/fid listxattr for /mnt/lustre/.lustre/fid getfattr: Removing leading '/' from absolute path names # file: mnt/lustre/.lustre/fid trusted.lma=0sAAAAAAAAAAACAAAAAgAAAAIAAAAAAAAA trusted.name1="value1" delxattr from /mnt/lustre/.lustre/fid touch invalid fid: /mnt/lustre/.lustre/fid/[0x200000400:0x2:0x3] touch: setting times of '/mnt/lustre/.lustre/fid/[0x200000400:0x2:0x3]': No such file or directory touch non-normal fid: /mnt/lustre/.lustre/fid/[0x1:0x2:0x0] touch: setting times of '/mnt/lustre/.lustre/fid/[0x1:0x2:0x0]': No such file or directory rename d154a.sanity to /mnt/lustre/.lustre/fid rename '/mnt/lustre/d154a.sanity' returned -1: Operation not permitted change mode of /mnt/lustre/.lustre/fid to 777 restore mode of /mnt/lustre/.lustre/fid to 
100 Succeed in opening file "/mnt/lustre/f154a.sanity-2"(flags=O_LOV_DELAY_CREATE) cp /etc/passwd /mnt/lustre/.lustre/fid/[0x2000013a2:0x1f:0x0] cp /etc/passwd /mnt/lustre/f154a.sanity-2 diff /etc/passwd /mnt/lustre/.lustre/fid/[0x2000013a2:0x1f:0x0] rm: cannot remove '/mnt/lustre/.lustre/lost+found/MDT0001': Operation not permitted rm: cannot remove '/mnt/lustre/.lustre/lost+found/MDT0000': Operation not permitted rm: cannot remove '/mnt/lustre/.lustre/fid': Operation not permitted touch: setting times of '/mnt/lustre/.lustre/file': No such file or directory mkdir: cannot create directory '/mnt/lustre/.lustre/dir': Operation not permitted PASS 154a (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 154b: Open-by-FID for remote directory ==== 22:20:36 (1713406836) stat fid [0x2400013a2:0x9:0x0] File: '/mnt/lustre/.lustre/fid/[0x2400013a2:0x9:0x0]' Size: 159 Blocks: 1 IO Block: 4194304 regular file Device: 2c54f966h/743766374d Inode: 162129670907625481 Links: 1 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-17 22:20:37.000000000 -0400 Modify: 2024-04-17 22:20:37.000000000 -0400 Change: 2024-04-17 22:20:37.000000000 -0400 Birth: - touch fid [0x2400013a2:0x9:0x0] write to fid [0x2400013a2:0x9:0x0] read fid [0x2400013a2:0x9:0x0] append write to fid [0x2400013a2:0x9:0x0] rename fid [0x2400013a2:0x9:0x0] mv: cannot move '/mnt/lustre/.lustre/fid/[0x2400013a2:0x9:0x0]' to '/mnt/lustre/d154b.sanity/remote_dir/f154b.sanity.1': Operation not permitted mv: cannot move '/mnt/lustre/d154b.sanity/remote_dir/f154b.sanity.1' to '/mnt/lustre/.lustre/fid/[0x2400013a2:0x9:0x0]': Operation not permitted truncate fid [0x2400013a2:0x9:0x0] link fid [0x2400013a2:0x9:0x0] uid=500(sanityusr) gid=500(sanityusr) groups=500(sanityusr) setfacl fid [0x2400013a2:0x9:0x0] getfacl fid [0x2400013a2:0x9:0x0] getfacl: Removing leading '/' from absolute path names # file: mnt/lustre/.lustre/fid/[0x2400013a2:0x9:0x0] # owner: root # group: root user::rw- user:sanityusr:rwx group::r-- mask::rwx other::r-- unlink fid [0x2400013a2:0x9:0x0] unlink: cannot unlink '/mnt/lustre/.lustre/fid/[0x2400013a2:0x9:0x0]': Operation not permitted mknod fid [0x2400013a2:0x9:0x0] mknod: '/mnt/lustre/.lustre/fid/[0x2400013a2:0x9:0x0]': Operation not permitted stat non-exist fid [0xf00000400:0x1:0x0] stat: cannot stat '/mnt/lustre/.lustre/fid/[0xf00000400:0x1:0x0]': No such file or directory write to non-exist fid [0xf00000400:0x1:0x0] /home/green/git/lustre-release/lustre/tests/sanity.sh: line 16999: /mnt/lustre/.lustre/fid/[0xf00000400:0x1:0x0]: Operation not permitted link new fid [0xf00000400:0x1:0x0] ln: failed to create hard link '/mnt/lustre/.lustre/fid/[0xf00000400:0x1:0x0]' => '/mnt/lustre/d154b.sanity/remote_dir/f154b.sanity': Operation not permitted ls [0x2400013a2:0xb:0x0] f154b.sanity touch [0x2400013a2:0xb:0x0]/f154b.sanity.1 touch /mnt/lustre/.lustre/fid/f154b.sanity touch: setting times of '/mnt/lustre/.lustre/fid/f154b.sanity': No such file or directory setxattr to /mnt/lustre/.lustre/fid listxattr for /mnt/lustre/.lustre/fid getfattr: Removing leading '/' from absolute path names # file: mnt/lustre/.lustre/fid trusted.lma=0sAAAAAAAAAAACAAAAAgAAAAIAAAAAAAAA trusted.name1="value1" delxattr from /mnt/lustre/.lustre/fid touch invalid fid: /mnt/lustre/.lustre/fid/[0x200000400:0x2:0x3] touch: setting times of '/mnt/lustre/.lustre/fid/[0x200000400:0x2:0x3]': No such file or directory touch non-normal fid: /mnt/lustre/.lustre/fid/[0x1:0x2:0x0] touch: setting 
times of '/mnt/lustre/.lustre/fid/[0x1:0x2:0x0]': No such file or directory rename d154b.sanity to /mnt/lustre/.lustre/fid rename '/mnt/lustre/d154b.sanity/remote_dir/d154b.sanity' returned -1: Operation not permitted change mode of /mnt/lustre/.lustre/fid to 777 restore mode of /mnt/lustre/.lustre/fid to 100 Succeed in opening file "/mnt/lustre/d154b.sanity/remote_dir/f154b.sanity-2"(flags=O_LOV_DELAY_CREATE) cp /etc/passwd /mnt/lustre/.lustre/fid/[0x2400013a2:0xe:0x0] cp /etc/passwd /mnt/lustre/d154b.sanity/remote_dir/f154b.sanity-2 diff /etc/passwd /mnt/lustre/.lustre/fid/[0x2400013a2:0xe:0x0] PASS 154b (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 154c: lfs path2fid and fid2path multiple arguments ========================================================== 22:20:42 (1713406842) PASS 154c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 154d: Verify open file fid ================ 22:20:47 (1713406847) mdt.lustre-MDT0000.exports.192.168.201.46@tcp.open_files= [0x2000013a2:0x13:0x0] [0x200000002:0x1:0x0] [0x200000002:0x3:0x0] [0x200000002:0x2:0x0] [0x2000013a2:0x2a:0x0] PASS 154d (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 154e: .lustre is not returned by readdir == 22:20:52 (1713406852) PASS 154e (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 154f: get parent fids by reading link ea == 22:20:57 (1713406857) [0x2000013a2:0x2d:0x0]/f154f.sanity [0x2000013a2:0x2e:0x0]/link [0x2000013a2:0x2d:0x0]/f154f.sanity [0x2000013a2:0x2e:0x0]/link [0x2000013a2:0x2d:0x0]/f154f.sanity [0x2000013a2:0x2e:0x0]/link [0x2000013a2:0x2d:0x0]/f154f.sanity [0x2000013a2:0x2e:0x0]/link [0x200000007:0x1:0x0]/f llite.lustre-ffff8800b060b000.xattr_cache=1 [0x2000013a2:0x2e:0x0]/link [0x2000013a2:0x2e:0x0]/f154f.sanity.moved PASS 154f (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 154g: various llapi FID tests ============= 22:21:02 (1713406862) Starting test test10 at 1713406862 Finishing test test10 at 1713406862 Starting test test11 at 1713406863 Finishing test test11 at 1713406863 Starting test test12 at 1713406863 Finishing test test12 at 1713406863 Starting test test20 at 1713406863 Finishing test test20 at 1713407147 Starting test test30 at 1713407209 Was able to store 155 links in the EA Finishing test test30 at 1713407225 Starting test test31 at 1713407235 Finishing test test31 at 1713407235 Starting test test40 at 1713407235 Finishing test test40 at 1713407235 Starting test test41 at 1713407235 Finishing test test41 at 1713407235 Starting test test42 at 1713407235 Finishing test test42 at 1713407238 PASS 154g (382s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 154h: Verify interactive path2fid ========= 22:27:26 (1713407246) [0x2000013a2:0x88c:0x0] PASS 154h (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 155a: Verify small file correctness: read cache:on write_cache:on ========================================================== 22:27:31 (1713407251) 1+0 records in 1+0 records out 6096 bytes (6.1 kB) copied, 0.00043356 s, 14.1 MB/s PASS 155a (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 155b: Verify small file correctness: read cache:on write_cache:off 
========================================================== 22:27:38 (1713407258) 1+0 records in 1+0 records out 6096 bytes (6.1 kB) copied, 0.000440153 s, 13.8 MB/s PASS 155b (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 155c: Verify small file correctness: read cache:off write_cache:on ========================================================== 22:27:45 (1713407265) 1+0 records in 1+0 records out 6096 bytes (6.1 kB) copied, 0.00040456 s, 15.1 MB/s PASS 155c (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 155d: Verify small file correctness: read cache:off write_cache:off ========================================================== 22:27:52 (1713407272) 1+0 records in 1+0 records out 6096 bytes (6.1 kB) copied, 0.000444287 s, 13.7 MB/s PASS 155d (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 155e: Verify big file correctness: read cache:on write_cache:on ========================================================== 22:28:00 (1713407280) Waiting for MDT destroys to complete OST kbytes available: 3598224 3603344 Min free space: OST 0: 3598224 Max free space: OST 1: 3603344 OSS cache size: 65536 KB Large file size: 131072 KB 1024+0 records in 1024+0 records out 134217728 bytes (134 MB) copied, 1.08734 s, 123 MB/s -rw-r--r-- 1 root root 128M Apr 17 22:28 /mnt/lustre/f155e.sanity -rw-r--r-- 1 root root 128M Apr 17 22:28 /tmp/f155e.sanity PASS 155e (16s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 155f: Verify big file correctness: read cache:on write_cache:off ========================================================== 22:28:17 (1713407297) Waiting for MDT destroys to complete OST kbytes available: 3598224 3603344 Min free space: OST 0: 3598224 Max free space: OST 1: 3603344 OSS cache size: 65536 KB Large file size: 131072 KB 1024+0 records in 1024+0 records out 134217728 bytes (134 MB) copied, 1.07901 s, 124 MB/s -rw-r--r-- 1 root root 128M Apr 17 22:28 /mnt/lustre/f155f.sanity -rw-r--r-- 1 root root 128M Apr 17 22:28 /tmp/f155f.sanity PASS 155f (22s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 155g: Verify big file correctness: read cache:off write_cache:on ========================================================== 22:28:42 (1713407322) Waiting for MDT destroys to complete OST kbytes available: 3598224 3603344 Min free space: OST 0: 3598224 Max free space: OST 1: 3603344 OSS cache size: 65536 KB Large file size: 131072 KB 1024+0 records in 1024+0 records out 134217728 bytes (134 MB) copied, 1.07564 s, 125 MB/s -rw-r--r-- 1 root root 128M Apr 17 22:28 /mnt/lustre/f155g.sanity -rw-r--r-- 1 root root 128M Apr 17 22:28 /tmp/f155g.sanity PASS 155g (17s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 155h: Verify big file correctness: read cache:off write_cache:off ========================================================== 22:29:01 (1713407341) Waiting for MDT destroys to complete OST kbytes available: 3598224 3603344 Min free space: OST 0: 3598224 Max free space: OST 1: 3603344 OSS cache size: 65536 KB Large file size: 131072 KB 1024+0 records in 1024+0 records out 134217728 bytes (134 MB) copied, 1.07434 s, 125 MB/s -rw-r--r-- 1 root root 128M Apr 17 22:29 /mnt/lustre/f155h.sanity -rw-r--r-- 1 root root 128M Apr 17 22:29 /tmp/f155h.sanity PASS 155h (18s) debug_raw_pointers=0 
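The 155a-155h matrix above and test 156 below steer the same two OSS-side switches and then watch the obdfilter cache-hit counter move (or, with caches off, stay flat). A condensed sketch of that pattern; the tunable names below are the standard obdfilter read/writethrough cache parameters and are an assumption for this setup, with the lctl calls run on the OSS and the dd on a client:

    lctl set_param obdfilter.*.read_cache_enable=1 \
                   obdfilter.*.writethrough_cache_enable=1        # on the OSS
    before=$(lctl get_param -n obdfilter.*.stats |
             awk '$1 == "cache_hit" {n += $2} END {print n+0}')
    dd if=/mnt/lustre/f156.demo of=/dev/null bs=4k count=3        # client re-read of a small file
    after=$(lctl get_param -n obdfilter.*.stats |
            awk '$1 == "cache_hit" {n += $2} END {print n+0}')
    echo "cache hits: before: $before, after: $after"             # expect after > before with read cache on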
debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 156: Verification of tunables ============= 22:29:21 (1713407361) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d156.sanity-check 4+0 records in 4+0 records out 16384 bytes (16 kB) copied, 0.0075495 s, 2.2 MB/s BEFORE:10266 AFTER:10270 4+0 records in 4+0 records out 16384 bytes (16 kB) copied, 0.00574024 s, 2.9 MB/s BEFORE:131097 AFTER:131101 Turn on read and write cache Write data and read it back. Read should be satisfied from the cache. 3+0 records in 3+0 records out 12288 bytes (12 kB) copied, 0.0108798 s, 1.1 MB/s cache hits: before: 65581, after: 65584 Read again; it should be satisfied from the cache. cache hits:: before: 65584, after: 65587 Turn off the read cache and turn on the write cache Read again; it should be satisfied from the cache. cache hits:: before: 65587, after: 65590 Write data and read it back. Read should be satisfied from the cache. 3+0 records in 3+0 records out 12288 bytes (12 kB) copied, 0.0038266 s, 3.2 MB/s cache hits:: before: 65590, after: 65593 Turn off read and write cache Write data and read it back It should not be satisfied from the cache. 3+0 records in 3+0 records out 12288 bytes (12 kB) copied, 0.00885611 s, 1.4 MB/s cache hits:: before: 65593, after: 65593 Turn on the read cache and turn off the write cache Write data and read it back It should not be satisfied from the cache. 3+0 records in 3+0 records out 12288 bytes (12 kB) copied, 0.0091002 s, 1.4 MB/s cache hits:: before: 65593, after: 65593 Read again; it should be satisfied from the cache. cache hits:: before: 65593, after: 65596 PASS 156 (26s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160a: changelog sanity ==================== 22:29:49 (1713407389) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl1 cl1' striped dir -i0 -c2 -H crush /mnt/lustre/d160a.sanity/pics/2008/zachy lustre-MDT0000: clear the changelog for cl1 of all records lustre-MDT0001: clear the changelog for cl1 of all records verifying changelog mask mdd.lustre-MDT0000.changelog_mask=-MKDIR mdd.lustre-MDT0001.changelog_mask=-MKDIR mdd.lustre-MDT0000.changelog_mask=-CLOSE mdd.lustre-MDT0001.changelog_mask=-CLOSE striped dir -i0 -c2 -H crush /mnt/lustre/d160a.sanity/pics/zach/sofia mdd.lustre-MDT0000.changelog_mask=+MKDIR mdd.lustre-MDT0001.changelog_mask=+MKDIR mdd.lustre-MDT0000.changelog_mask=+CLOSE mdd.lustre-MDT0001.changelog_mask=+CLOSE striped dir -i0 -c2 -H crush2 /mnt/lustre/d160a.sanity/pics/2008/sofia verifying target fid verifying parent fid getting records for cl1 current_index: 12 ID index (idle) mask cl1 4 (3) lustre-MDT0000: clear the changelog for cl1 to record #7 verifying user clear: 4 + 3 == 7 lustre-MDT0000.12 11CLOSE 02:29:54.447688288 2024.04.18 0x242 t=[0x2000013a2:0x89f:0x0] j=bash.0 ef=0xf u=0:0 nid=192.168.201.46@tcp lustre-MDT0001.1 01CREAT 02:29:52.197095404 2024.04.18 0x0 t=[0x2400013a2:0x12:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x2000013a2:0x89c:0x0] f160a.sanity lustre-MDT0001.2 11CLOSE 02:29:52.212934937 2024.04.18 0x42 t=[0x2400013a2:0x12:0x0] j=touch.0 ef=0xf u=0:0 nid=192.168.201.46@tcp lustre-MDT0001.3 01CREAT 02:29:52.230331414 2024.04.18 0x0 t=[0x2400013a2:0x13:0x0] j=cp.0 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x2000013a2:0x89c:0x0] pic1.jpg lustre-MDT0001.4 11CLOSE 02:29:52.246642001 2024.04.18 0xc2 t=[0x2400013a2:0x13:0x0] j=cp.0 ef=0xf u=0:0 nid=192.168.201.46@tcp verifying user 
min purge: 7 + 1 == 8 lustre-MDT0000: clear the changelog for cl1 of all records lustre-MDT0001: clear the changelog for cl1 of all records Stopping /mnt/lustre-mds1 (opts:) on oleg146-server Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-MDT0000 verifying index survives MDT restart: 12 == 12 verifying users from this test are deregistered lustre-MDT0000: clear the changelog for cl1 of all records lustre-MDT0000: Deregistered changelog user #1 lustre-MDT0001: clear the changelog for cl1 of all records lustre-MDT0001: Deregistered changelog user #1 current_index: 12 ID index (idle) mask other changelog users; can't verify off lustre-MDT0001: changelog user 'cl1' not found lustre-MDT0000: changelog user 'cl1' not found PASS 160a (21s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160b: Verify that very long rename doesn't crash in changelog ========================================================== 22:30:13 (1713407413) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl2 cl2' creating very long named file renaming very long named file lustre-MDT0000.15 08RENME 02:30:15.838411780 2024.04.18 0x0 t=[0:0x0:0x0] j=mv.0 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x200000007:0x1:0x0] bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb s=[0x2000013a2:0x8a1:0x0] sp=[0x200000007:0x1:0x0] aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa lustre-MDT0001: clear the changelog for cl2 of all records lustre-MDT0001: Deregistered changelog user #2 lustre-MDT0000: clear the changelog for cl2 of all records lustre-MDT0000: Deregistered changelog user #2 PASS 160b (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160c: verify that changelog log catch the truncate event ========================================================== 22:30:22 (1713407422) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl3 cl3' mdd.lustre-MDT0000.changelog_mask=-TRUNC mdd.lustre-MDT0001.changelog_mask=-TRUNC mdd.lustre-MDT0000.changelog_mask=+TRUNC mdd.lustre-MDT0001.changelog_mask=+TRUNC lustre-MDT0001.5 02MKDIR 02:30:24.647321132 2024.04.18 0x0 t=[0x2400013a2:0x14:0x0] j=mkdir.0 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x200000007:0x1:0x0] d160c.sanity lustre-MDT0001.6 01CREAT 02:30:24.675506024 2024.04.18 0x0 t=[0x2400013a2:0x15:0x0] j=mcreate.0 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x2400013a2:0x14:0x0] foo_160c lustre-MDT0001.7 14SATTR 02:30:25.176742636 2024.04.18 0xe t=[0x2400013a2:0x15:0x0] j=truncate.0 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x2400013a2:0x14:0x0] lustre-MDT0001.8 13TRUNC 02:30:25.672922469 2024.04.18 0xe t=[0x2400013a2:0x15:0x0] j=truncate.0 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x2400013a2:0x14:0x0] lustre-MDT0001: clear the changelog for cl3 of all records lustre-MDT0001: Deregistered changelog user #3 
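All of the 160x tests wrap the same changelog-user lifecycle that these lines show interleaved across two MDTs: register a consumer, generate metadata traffic, read and acknowledge records, deregister. Condensed to the single-MDT command sequence (register/deregister run on the MDS; device and file names mirror this log):

    cl=$(lctl --device lustre-MDT0000 changelog_register -n)   # e.g. prints cl1
    touch /mnt/lustre/f160.demo                                # any namespace op is recorded
    lfs changelog lustre-MDT0000                               # dump pending records
    lfs changelog_clear lustre-MDT0000 "$cl" 0                 # endrec 0 = release all records seen
    lctl --device lustre-MDT0000 changelog_deregister "$cl"

The clear step matters: the MDT only purges records once every registered user has acknowledged them, which is exactly the behavior 160f's garbage-collection logic probes when idle users hold records past changelog_max_idle_time.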
lustre-MDT0000: clear the changelog for cl3 of all records lustre-MDT0000: Deregistered changelog user #3 PASS 160c (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160d: verify that changelog catches the migrate event ========================================================== 22:30:32 (1713407432) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl4 cl4' lustre-MDT0000: clear the changelog for cl4 of all records lustre-MDT0001: clear the changelog for cl4 of all records lustre-MDT0001.11 20MIGRT 02:30:34.547451842 2024.04.18 0x0 t=[0x2400013a2:0x18:0x0] j=lfs.0 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x2400013a2:0x16:0x0] migrate_dir s=[0x2400013a2:0x17:0x0] sp=[0x2400013a2:0x16:0x0] migrate_dir lustre-MDT0001.12 12LYOUT 02:30:34.595888966 2024.04.18 0x0 t=[0x2400013a2:0x18:0x0] j=lfs.0 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x2400013a2:0x16:0x0] lustre-MDT0001: clear the changelog for cl4 of all records lustre-MDT0001: Deregistered changelog user #4 lustre-MDT0000: clear the changelog for cl4 of all records lustre-MDT0000: Deregistered changelog user #4 PASS 160d (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160e: changelog negative testing (should return errors) ========================================================== 22:30:41 (1713407441) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl5 cl5' pdsh@oleg146-client: oleg146-server: ssh exited with exit code 4 deregister an existing changelog user usage: --device <devname> changelog_deregister [<id>|cl<id>...] [--help|-h] [--user|-u <username>] run <command> after connecting to device --device <devname> oleg146-server: error: changelog_deregister: User not found pdsh@oleg146-client: oleg146-server: ssh exited with exit code 2 lfs changelog_clear: cannot purge records for 'cl5': Invalid argument (22) changelog_clear: record out of range: 1000000000 lustre-MDT0001: clear the changelog for cl5 of all records lustre-MDT0001: Deregistered changelog user #5 lustre-MDT0000: clear the changelog for cl5 of all records lustre-MDT0000: Deregistered changelog user #5 PASS 160e (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160f: changelog garbage collect (timestamped users) ========================================================== 22:30:50 (1713407450) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl6 cl6' mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl6 cl7 cl6 cl7' striped dir -i0 -c2 -H all_char /mnt/lustre/d160f.sanity 1713407454: creating first files mdd.lustre-MDT0000.changelog_max_idle_time=15 mdd.lustre-MDT0001.changelog_max_idle_time=15 mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0001.changelog_gc=1 mdd.lustre-MDT0000.changelog_min_gc_interval=2 mdd.lustre-MDT0001.changelog_min_gc_interval=2 mdd.lustre-MDT0000.changelog_min_free_cat_entries=3 mdd.lustre-MDT0001.changelog_min_free_cat_entries=3 1713407458: sleep1 7/15s fail_loc=0x1313 fail_val=3 lustre-MDT0000: clear the changelog for cl6 to record #18 mds1: verifying user cl6 clear: 16 + 2 == 18 lustre-MDT0001: clear the changelog for cl6 to record #14 mds2: verifying user cl6 clear: 12 + 2 == 14 1713407468: sleep2 3/15s 1713407471: creating 4 files pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 mds1:
1713407473 verify rec 18+1 == 19 mds2: 1713407474 verify rec 14+1 == 15 mdd.lustre-MDT0000.changelog_min_free_cat_entries=2 mdd.lustre-MDT0001.changelog_min_free_cat_entries=2 mdd.lustre-MDT0000.changelog_min_gc_interval=3600 mdd.lustre-MDT0001.changelog_min_gc_interval=3600 mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0001.changelog_gc=1 mdd.lustre-MDT0000.changelog_max_idle_time=2592000 mdd.lustre-MDT0001.changelog_max_idle_time=2592000 lustre-MDT0001: changelog user 'cl7' not found lustre-MDT0000: changelog user 'cl7' not found lustre-MDT0001: clear the changelog for cl6 of all records lustre-MDT0001: Deregistered changelog user #6 lustre-MDT0000: clear the changelog for cl6 of all records lustre-MDT0000: Deregistered changelog user #6 PASS 160f (31s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160g: changelog garbage collect on idle records ========================================================== 22:31:23 (1713407483) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl8 cl8' mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl8 cl9 cl8 cl9' striped dir -i0 -c2 -H all_char /mnt/lustre/d160g.sanity mdd.lustre-MDT0000.changelog_max_idle_indexes=2 mdd.lustre-MDT0001.changelog_max_idle_indexes=2 mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0001.changelog_gc=1 mdd.lustre-MDT0000.changelog_min_gc_interval=2 mdd.lustre-MDT0001.changelog_min_gc_interval=2 lustre-MDT0000: clear the changelog for cl8 to record #23 mds1: verifying user1 cl8 clear: 21 + 2 == 23 lustre-MDT0001: clear the changelog for cl8 to record #18 mds2: verifying user1 cl8 clear: 16 + 2 == 18 pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 mds1: 1713407494 verify rec 23+1 == 24 mds2: 1713407495 verify rec 18+1 == 19 mdd.lustre-MDT0000.changelog_min_gc_interval=3600 mdd.lustre-MDT0001.changelog_min_gc_interval=3600 mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0001.changelog_gc=1 mdd.lustre-MDT0000.changelog_max_idle_indexes=2097446912 mdd.lustre-MDT0001.changelog_max_idle_indexes=2097446912 lustre-MDT0001: changelog user 'cl9' not found lustre-MDT0000: changelog user 'cl9' not found lustre-MDT0001: clear the changelog for cl8 of all records lustre-MDT0001: Deregistered changelog user #8 lustre-MDT0000: clear the changelog for cl8 of all records lustre-MDT0000: Deregistered changelog user #8 PASS 160g (19s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160h: changelog gc thread stop upon umount, orphan records delete ========================================================== 22:31:44 (1713407504) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl10 cl10' mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl10 cl11 cl10 cl11' striped dir -i0 -c2 -H all_char /mnt/lustre/d160h.sanity mdd.lustre-MDT0000.changelog_max_idle_time=10 mdd.lustre-MDT0001.changelog_max_idle_time=10 mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0001.changelog_gc=1 mdd.lustre-MDT0000.changelog_min_gc_interval=2 mdd.lustre-MDT0001.changelog_min_gc_interval=2 lustre-MDT0000: clear the changelog for cl10 to record #27 mds1: verifying user cl10 clear: 25 + 2 == 27 lustre-MDT0001: clear the changelog for cl10 to record #21 mds2: verifying user cl10 clear: 19 + 2 == 21 fail_loc=0x1316 total: 4 
create in 0.03 seconds: 120.26 ops/second Stopping /mnt/lustre-mds1 (opts:) on oleg146-server Stopping /mnt/lustre-mds2 (opts:) on oleg146-server fail_loc=0 Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-MDT0000 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-MDT0001 mds1: verifying first index 27 + 1 == 28 mds2: verifying first index 21 + 1 == 22 mdd.lustre-MDT0000.changelog_min_gc_interval=3600 mdd.lustre-MDT0001.changelog_min_gc_interval=3600 mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0001.changelog_gc=1 mdd.lustre-MDT0000.changelog_max_idle_time=2592000 mdd.lustre-MDT0001.changelog_max_idle_time=2592000 lustre-MDT0001: changelog user 'cl11' not found lustre-MDT0000: changelog user 'cl11' not found lustre-MDT0001: clear the changelog for cl10 of all records lustre-MDT0001: Deregistered changelog user #10 lustre-MDT0000: clear the changelog for cl10 of all records lustre-MDT0000: Deregistered changelog user #10 PASS 160h (44s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160i: changelog user register/unregister race ========================================================== 22:32:31 (1713407551) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl11 cl11' striped dir -i0 -c2 -H all_char /mnt/lustre/d160i.sanity fail_loc=0x10001315 fail_val=1 lustre-MDT0000: clear the changelog for cl11 of all records mdd.lustre-MDT0000.changelog_mask=+hsm lustre-MDT0000: Deregistered changelog user #11 lustre-MDT0001: clear the changelog for cl11 of all records mdd.lustre-MDT0001.changelog_mask=+hsm lustre-MDT0001: Deregistered changelog user #11 Registered 2 changelog users: 'cl11 cl12 cl11 cl12' cl12 33 (1) cl12 25 (1) total: 4 create in 0.02 seconds: 160.17 ops/second verify changelogs are on: 35 != 33 verify changelogs are on: 35 != 33 lustre-MDT0001: clear the changelog for cl12 of all records lustre-MDT0001: Deregistered changelog user #12 lustre-MDT0000: clear the changelog for cl12 of all records lustre-MDT0000: Deregistered changelog user #12 lustre-MDT0001: changelog user 'cl11' not found lustre-MDT0000: changelog user 'cl11' not found PASS 160i (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160j: client can be umounted while its changelog is being used ========================================================== 22:32:48 (1713407568) Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre2 mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl13 cl13' striped dir -i0 -c2 -H all_char /mnt/lustre/d160j.sanity Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre lustre-MDT0000: clear the changelog for cl13 of all records lustre-MDT0000: Deregistered changelog user #13 lustre-MDT0001: clear the changelog for cl13 of all records lustre-MDT0001: Deregistered changelog user #13 lustre-MDT0001: changelog user 'cl13' not found lustre-MDT0000: changelog user 'cl13' not found PASS 160j (8s) debug_raw_pointers=0 debug_raw_pointers=0
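For reading the raw records in the tests that follow, the whitespace-separated layout shown throughout this log is: index, type, time, date, flags, t=[target FID], j=jobid, ef=extra flags, u=uid:gid, nid=client NID, then p=[parent FID] and a name when applicable. A minimal sketch of pulling a few of those columns out of a live stream (assumes the default lfs changelog output format seen in this log):

    # e.g. lustre-MDT0000.43 07RMDIR 02:33:02.4236... 2024.04.18 0x1 t=[0x200002342:0x2:0x0] j=rmdir.0 ...
    lfs changelog lustre-MDT0000 | awk '{ print $1, $2, $5, $6 }'   # index, type, flags, target FID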
debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160k: Verify that changelog records are not lost ========================================================== 22:32:59 (1713407579) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl14 cl14' fail_loc=0x8000015d fail_val=3 lustre-MDT0000.43 07RMDIR 02:33:02.423627937 2024.04.18 0x1 t=[0x200002342:0x2:0x0] j=rmdir.0 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x240001b72:0x1:0x0] 1 lustre-MDT0001: clear the changelog for cl14 of all records lustre-MDT0001: Deregistered changelog user #14 lustre-MDT0000: clear the changelog for cl14 of all records lustre-MDT0000: Deregistered changelog user #14 PASS 160k (14s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160l: Verify that MTIME changelog records contain the parent FID ========================================================== 22:33:16 (1713407596) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl15 cl15' mdd.lustre-MDT0000.changelog_mask=-CREAT mdd.lustre-MDT0001.changelog_mask=-CREAT mdd.lustre-MDT0000.changelog_mask=-CLOSE mdd.lustre-MDT0001.changelog_mask=-CLOSE striped dir -i0 -c2 -H crush2 /mnt/lustre/d160l.sanity lustre-MDT0001: clear the changelog for cl15 of all records lustre-MDT0001: Deregistered changelog user #15 lustre-MDT0000: clear the changelog for cl15 of all records lustre-MDT0000: Deregistered changelog user #15 PASS 160l (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160m: Changelog clear race ================ 22:33:28 (1713407608) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl16 cl16' mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl16 cl17 cl16 cl17' striped dir -i0 -c1 -H fnv_1a_64 /mnt/lustre/d160m.sanity total: 50 create in 0.24 seconds: 212.49 ops/second - unlinked 0 (time 1713407613 ; total 0 ; last 0) total: 50 unlinks in 0 seconds: inf unlinks/second rm: cannot remove '/mnt/lustre/d160m.sanity': Is a directory fail_loc=0x8000015f fail_val=0 lustre-MDT0000: clear the changelog for cl16 to record #54 lustre-MDT0000: clear the changelog for cl17 of all records lustre-MDT0000: clear the changelog for cl16 of all records lustre-MDT0001: clear the changelog for cl17 of all records lustre-MDT0001: Deregistered changelog user #17 lustre-MDT0000: clear the changelog for cl17 of all records lustre-MDT0000: Deregistered changelog user #17 lustre-MDT0001: clear the changelog for cl16 of all records lustre-MDT0001: Deregistered changelog user #16 lustre-MDT0000: clear the changelog for cl16 of all records lustre-MDT0000: Deregistered changelog user #16 PASS 160m (14s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160n: Changelog destroy race ============== 22:33:44 (1713407624) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl18 cl18' striped dir -i0 -c1 -H all_char /mnt/lustre/d160n.sanity - create 4784 (time 1713407638.71 total 10.00 last 478.38) - create 9345 (time 1713407648.71 total 20.00 last 456.06) total: 10000 create in 21.46 seconds: 465.93 ops/second rename '/mnt/lustre/d160n.sanity/f160n.sanity10000' returned -1: No such file or directory - unlinked 0 (time 1713407819 ; total 0 ; last 0) total: 
10000 unlinks in 40 seconds: 250.000000 unlinks/second last record 30146 - create 4568 (time 1713407871.51 total 10.00 last 456.74) - create 9098 (time 1713407881.51 total 20.00 last 452.97) total: 10000 create in 21.95 seconds: 455.50 ops/second rename '/mnt/lustre/d160n.sanity/f160n.sanity10000' returned -1: No such file or directory - unlinked 0 (time 1713408050 ; total 0 ; last 0) total: 10000 unlinks in 41 seconds: 243.902435 unlinks/second last record 60146 - create 4616 (time 1713408102.87 total 10.00 last 461.55) - create 9179 (time 1713408112.87 total 20.00 last 456.22) total: 10000 create in 21.78 seconds: 459.14 ops/second rename '/mnt/lustre/d160n.sanity/f160n.sanity10000' returned -1: No such file or directory - unlinked 0 (time 1713408283 ; total 0 ; last 0) total: 10000 unlinks in 40 seconds: 250.000000 unlinks/second last record 90146 fail_loc=0x8000016c fail_val=0 lustre-MDT0000: clear the changelog for cl18 of all records lustre-MDT0000: clear the changelog for cl18 of all records lustre-MDT0001: clear the changelog for cl18 of all records lustre-MDT0001: Deregistered changelog user #18 lustre-MDT0000: clear the changelog for cl18 of all records lustre-MDT0000: Deregistered changelog user #18 PASS 160n (708s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160o: changelog user name and mask ======== 22:45:35 (1713408335) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl19-test_160o cl19-test_160o' oleg146-server: error: changelog_register: Invalid argument pdsh@oleg146-client: oleg146-server: ssh exited with exit code 22 oleg146-server: error: changelog_register: User exists pdsh@oleg146-client: oleg146-server: ssh exited with exit code 17 oleg146-server: error: changelog_register: File name too long pdsh@oleg146-client: oleg146-server: ssh exited with exit code 36 mdd.lustre-MDT0000.changelog_mask=MARK+HSM mdd.lustre-MDT0001.changelog_mask=MARK+HSM error: get_param: param_path 'mdd/*/changelog*mask': No such file or directory lustre-MDT0000: clear the changelog for cl19-test_160o of all records lustre-MDT0001: clear the changelog for cl19-test_160o of all records striped dir -i0 -c1 -H crush2 /mnt/lustre/d160o.sanity mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl19-test_160o cl20 cl19-test_160o cl23' mdd.lustre-MDT0000.changelog_mask=MARK mdd.lustre-MDT0001.changelog_mask=MARK mdd.lustre-MDT0000.changelog_mask=CLOSE,UNLNK mdd.lustre-MDT0001.changelog_mask=CLOSE,UNLNK lustre-MDT0000: Deregistered changelog user #19 lustre-MDT0001: clear the changelog for cl20 of all records lustre-MDT0001: Deregistered changelog user #20 lustre-MDT0000: clear the changelog for cl23 of all records lustre-MDT0000: Deregistered changelog user #23 lustre-MDT0001: clear the changelog for cl19-test_160o of all records lustre-MDT0001: Deregistered changelog user #19 lustre-MDT0000: changelog user 'cl19-test_160o' not found PASS 160o (14s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160p: Changelog orphan cleanup with no users ========================================================== 22:45:52 (1713408352) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl21 cl24' striped dir -i0 -c1 -H crush2 /mnt/lustre/d160p.sanity total: 50 create in 0.24 seconds: 212.49 ops/second - unlinked 0 (time 1713408355 ; total 0 
; last 0) total: 50 unlinks in 0 seconds: inf unlinks/second Stopping /mnt/lustre-mds1 (opts:) on oleg146-server oleg146-server: debugfs 1.46.2.wc5 (26-Mar-2022) Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-MDT0000 lustre-MDT0001: clear the changelog for cl21 of all records lustre-MDT0001: Deregistered changelog user #21 lustre-MDT0000: changelog user 'cl24' not found PASS 160p (16s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160q: changelog effective mask is DEFMASK if not set ========================================================== 22:46:10 (1713408370) mdd.lustre-MDT0000.changelog_mask=MARK mdd.lustre-MDT0001.changelog_mask=MARK lustre-MDT0000: Deregistered changelog user #1 PASS 160q (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160s: changelog garbage collect on idle records * time ========================================================== 22:46:16 (1713408376) fail_loc=0x1314 fail_val=864000 mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl22 cl2' striped dir -i0 -c2 -H all_char /mnt/lustre/d160s.sanity mdd.lustre-MDT0000.changelog_max_idle_indexes=2097446912 mdd.lustre-MDT0001.changelog_max_idle_indexes=2097446912 mdd.lustre-MDT0000.changelog_max_idle_time=2592000 mdd.lustre-MDT0001.changelog_max_idle_time=2592000 mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0001.changelog_gc=1 mdd.lustre-MDT0000.changelog_min_gc_interval=2 mdd.lustre-MDT0001.changelog_min_gc_interval=2 fail_loc=0x16d fail_val=500000000 sleep 2 for interval pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 fail_loc=0 mdd.lustre-MDT0000.changelog_min_gc_interval=3600 mdd.lustre-MDT0001.changelog_min_gc_interval=3600 mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0001.changelog_gc=1 mdd.lustre-MDT0000.changelog_max_idle_time=2592000 mdd.lustre-MDT0001.changelog_max_idle_time=2592000 mdd.lustre-MDT0000.changelog_max_idle_indexes=2097446912 mdd.lustre-MDT0001.changelog_max_idle_indexes=2097446912 lustre-MDT0001: changelog user 'cl22' not found lustre-MDT0000: changelog user 'cl2' not found PASS 160s (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160t: changelog garbage collect on lack of space ========================================================== 22:46:34 (1713408394) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl23-user1 cl3-user1' total: 2000 open/close in 8.80 seconds: 227.39 ops/second mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl23-user1 cl24-user2 cl3-user1 cl4-user2' total: 500 open/close in 2.17 seconds: 230.02 ops/second mdd.lustre-MDT0000.changelog_gc=1 mdd.lustre-MDT0000.changelog_min_gc_interval=2 sleep 2 for interval fail_loc=0x018c fail_val=1211180 total: 4 open/close in 0.08 seconds: 51.92 ops/second pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 fail_loc=0 mdd.lustre-MDT0000.changelog_min_gc_interval=3600 mdd.lustre-MDT0000.changelog_gc=1 lustre-MDT0001: clear the changelog for cl24-user2 of all records lustre-MDT0001: Deregistered changelog user #24 lustre-MDT0000: clear the changelog for cl4-user2 of all records 
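All of the garbage-collection variants in this group (160f through 160t) drive the same small set of per-MDT mdd tunables; the values below are the ones these tests set and later restore. A sketch of the setup half, for one MDT (mirror each line for lustre-MDT0001 on a two-MDT system):

    lctl set_param mdd.lustre-MDT0000.changelog_gc=1                 # let the MDT reap idle changelog users
    lctl set_param mdd.lustre-MDT0000.changelog_min_gc_interval=2    # seconds between GC passes (restored to 3600)
    lctl set_param mdd.lustre-MDT0000.changelog_max_idle_time=15     # evict users idle this long (restored to 2592000)
    lctl set_param mdd.lustre-MDT0000.changelog_max_idle_indexes=2   # or evict users this many records behind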
lustre-MDT0000: Deregistered changelog user #4 lustre-MDT0001: clear the changelog for cl23-user1 of all records lustre-MDT0001: Deregistered changelog user #23 lustre-MDT0000: changelog user 'cl3-user1' not found PASS 160t (31s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 160u: changelog rename record type name and sname strings are correct ========================================================== 22:47:06 (1713408426) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl25 cl5' creating simple directory tree creating rename/hw file creating very long named file move rename/hw to rename/a/a.hw lustre-MDT0001: clear the changelog for cl25 of all records lustre-MDT0001: Deregistered changelog user #25 lustre-MDT0000: clear the changelog for cl5 of all records lustre-MDT0000: Deregistered changelog user #5 PASS 160u (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 161a: link ea sanity ====================== 22:47:16 (1713408436) striped dir -i1 -c1 -H all_char /mnt/lustre/d161a.sanity striped dir -i1 -c1 -H all_char /mnt/lustre/d161a.sanity/foo1 striped dir -i1 -c1 -H crush /mnt/lustre/d161a.sanity/foo2 total: 1000 link in 3.97 seconds: 251.90 ops/second 74/1000 links in link EA - unlinked 0 (time 1713408445 ; total 0 ; last 0) total: 1000 unlinks in 4 seconds: 250.000000 unlinks/second PASS 161a (14s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 161b: link ea sanity under remote directory ========================================================== 22:47:32 (1713408452) total: 1000 link in 4.02 seconds: 248.51 ops/second 80/1000 links in link EA - unlinked 0 (time 1713408460 ; total 0 ; last 0) total: 1000 unlinks in 4 seconds: 250.000000 unlinks/second PASS 161b (14s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 161c: check CL_RENME[UNLINK] changelog record flags ========================================================== 22:47:48 (1713408468) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl26 cl6' striped dir -i1 -c2 -H crush /mnt/lustre/d161c.sanity lustre-MDT0001.500000045 08RENME 02:47:51.194903053 2024.04.18 0x1 t=[0x240001b72:0x14:0x0] j=mv.0 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x240001b72:0x13:0x0] bar_161c s=[0x200002342:0x7f6f:0x0] sp=[0x240001b72:0x13:0x0] foo_161c lustre-MDT0000: clear the changelog for cl6 of all records lustre-MDT0001: clear the changelog for cl26 of all records rename overwrite target with nlink = 1, changelog flags=0x1 lustre-MDT0000.500097783 08RENME 02:47:51.534441034 2024.04.18 0x0 t=[0x200002342:0x7f6f:0x0] j=mv.0 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x240001b72:0x13:0x0] bar_161c s=[0x200002342:0x7f70:0x0] sp=[0x240001b72:0x13:0x0] foo_161c lustre-MDT0000: clear the changelog for cl6 of all records lustre-MDT0001: clear the changelog for cl26 of all records rename overwrite a target having nlink > 1, changelog record has flags of 0x0 lustre-MDT0000.500097786 08RENME 02:47:51.791786735 2024.04.18 0x0 t=[0:0x0:0x0] j=mv.0 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x240001b72:0x13:0x0] foo2_161c s=[0x200002342:0x7f71:0x0] sp=[0x240001b72:0x13:0x0] foo_161c lustre-MDT0000: clear the changelog for cl6 of all records lustre-MDT0001: clear the changelog for cl26 of all records rename doesn't overwrite a target, changelog record 
has flags of 0x0 lustre-MDT0000.500097787 06UNLNK 02:47:52.005076697 2024.04.18 0x1 t=[0x200002342:0x7f71:0x0] j=rm.0 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x240001b72:0x13:0x0] foo2_161c lustre-MDT0000: clear the changelog for cl6 of all records lustre-MDT0001: clear the changelog for cl26 of all records unlink a file having nlink = 1, changelog record has flags of 0x1 lustre-MDT0000.500097788 06UNLNK 02:47:52.225880035 2024.04.18 0x1 t=[0x200002342:0x7f6f:0x0] j=ln.0 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x240001b72:0x13:0x0] foobar_161c lustre-MDT0000.500097790 06UNLNK 02:47:52.245753607 2024.04.18 0x0 t=[0x200002342:0x7f70:0x0] j=rm.0 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x240001b72:0x13:0x0] foobar_161c lustre-MDT0000: clear the changelog for cl6 of all records lustre-MDT0001: clear the changelog for cl26 of all records unlink a file having nlink > 1, changelog record flags '0x0' lustre-MDT0001: clear the changelog for cl26 of all records lustre-MDT0001: Deregistered changelog user #26 lustre-MDT0000: clear the changelog for cl6 of all records lustre-MDT0000: Deregistered changelog user #6 PASS 161c (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 161d: create with concurrent .lustre/fid access ========================================================== 22:47:58 (1713408478) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl27 cl7' fail_loc=0x8000140c fail_val=5 PID TTY TIME CMD 21740 pts/0 00:00:00 bash fail_loc=0 lustre-MDT0001: clear the changelog for cl27 of all records lustre-MDT0001: Deregistered changelog user #27 lustre-MDT0000: clear the changelog for cl7 of all records lustre-MDT0000: Deregistered changelog user #7 PASS 161d (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 162a: path lookup sanity ================== 22:48:09 (1713408489) striped dir -i0 -c1 -H all_char /mnt/lustre/d162a.sanity/d2 striped dir -i0 -c1 -H all_char /mnt/lustre/d162a.sanity/d2/a/b/c striped dir -i0 -c1 -H crush2 /mnt/lustre/d162a.sanity/d2/p/q/r FID '0x200002342:0x7f74:0x0' resolves to path 'd162a.sanity/d2/f162a.sanity' as expected FID '0x200002342:0x7f7d:0x0' resolves to path 'd162a.sanity/d2/p/q/r/slink' as expected FID '0x200002342:0x7f7e:0x0' resolves to path 'd162a.sanity/d2/p/q/r/slink.wrong' as expected FID '0x200002342:0x7f74:0x0' resolves to path 'd162a.sanity/d2/a/b/c/new_file' as expected FID '0x200002342:0x7f74:0x0' resolves to path '/mnt/lustre/d162a.sanity/d2/p/q/r/hlink' as expected FID '0x200002342:0x7f74:0x0' resolves to path 'd162a.sanity/d2/a/b/c/new_file' as expected PASS 162a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 162b: striped directory path lookup sanity ========================================================== 22:48:15 (1713408495) stat: cannot stat '/mnt/lustre/.lustre/fid/[0x240001b73:0x4:0x0]': Operation not permitted FID '0x200002342:0x7f80:0x0' resolves to path 'd162b.sanity/striped_dir/f0' as expected FID '0x200002342:0x7f83:0x0' resolves to path 'd162b.sanity/striped_dir/d0' as expected FID '0x240001b72:0x19:0x0' resolves to path 'd162b.sanity/striped_dir/f1' as expected FID '0x240001b72:0x1b:0x0' resolves to path 'd162b.sanity/striped_dir/d1' as expected FID '0x200002342:0x7f81:0x0' resolves to path 'd162b.sanity/striped_dir/f2' as expected FID '0x200002342:0x7f84:0x0' resolves to path 'd162b.sanity/striped_dir/d2' as expected 
FID '0x240001b72:0x1a:0x0' resolves to path 'd162b.sanity/striped_dir/f3' as expected FID '0x240001b72:0x1c:0x0' resolves to path 'd162b.sanity/striped_dir/d3' as expected FID '0x200002342:0x7f82:0x0' resolves to path 'd162b.sanity/striped_dir/f4' as expected FID '0x200002342:0x7f85:0x0' resolves to path 'd162b.sanity/striped_dir/d4' as expected PASS 162b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 162c: fid2path works with paths 100 or more directories deep ========================================================== 22:48:20 (1713408500) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d162c.sanity.local striped dir -i0 -c2 -H all_char /mnt/lustre/d162c.sanity.remote FID '0x240001b72:0x1d:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0' as expected striped dir -i0 -c2 -H crush2 /mnt/lustre/d162c.sanity.remote/0 FID '0x200002342:0x7f88:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0' as expected FID '0x200002342:0x7f89:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1' as expected striped dir -i0 -c2 -H crush /mnt/lustre/d162c.sanity.remote/0/1 FID '0x200002342:0x7f8a:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1' as expected FID '0x200002342:0x7f8b:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2' as expected striped dir -i0 -c2 -H crush2 /mnt/lustre/d162c.sanity.remote/0/1/2 FID '0x200002342:0x7f8c:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2' as expected FID '0x200002342:0x7f8d:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3' as expected striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d162c.sanity.remote/0/1/2/3 FID '0x200002342:0x7f8e:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3' as expected FID '0x200002342:0x7f8f:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4' as expected striped dir -i0 -c2 -H all_char /mnt/lustre/d162c.sanity.remote/0/1/2/3/4 FID '0x200002342:0x7f90:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4' as expected FID '0x200002342:0x7f91:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5' as expected striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5 FID '0x200002342:0x7f92:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5' as expected FID '0x200002342:0x7f93:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6' as expected striped dir -i0 -c2 -H all_char /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6 FID '0x200002342:0x7f94:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6' as expected FID '0x200002342:0x7f95:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7' as expected striped dir -i0 -c2 -H crush /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7 FID '0x200002342:0x7f96:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7' as expected FID '0x200002342:0x7f97:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8' as expected striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8 FID '0x200002342:0x7f98:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8' as expected FID '0x200002342:0x7f99:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9' as expected striped dir -i0 -c2 -H crush2 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9 FID '0x200002342:0x7f9a:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9' as expected FID '0x200002342:0x7f9b:0x0' 
resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10' as expected [... the same create-and-verify cycle repeats once per level down to depth 70: each step creates a striped dir (-i0 -c2, hash type cycling among all_char, crush, crush2 and fnv_1a_64) at the next depth under d162c.sanity.remote, then verifies that the new remote FID and the next local FID resolve to the expected paths ...] striped dir -i0 -c2 -H crush
/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70 FID '0x200002342:0x8014:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70' as expected FID '0x200002342:0x8015:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71' as expected striped dir -i0 -c2 -H crush /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71 FID '0x200002342:0x8016:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71' as expected FID '0x200002342:0x8017:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72' as expected striped dir -i0 -c2 -H all_char /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72 FID '0x200002342:0x8018:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72' as expected FID '0x200002342:0x8019:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73' as expected striped dir -i0 -c2 -H all_char /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73 FID '0x200002342:0x801a:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73' as expected FID '0x200002342:0x801b:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74' as expected striped dir -i0 -c2 -H all_char 
/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74 FID '0x200002342:0x801c:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74' as expected FID '0x200002342:0x801d:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75' as expected striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75 FID '0x200002342:0x801e:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75' as expected FID '0x200002342:0x801f:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76' as expected striped dir -i0 -c2 -H all_char /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76 FID '0x200002342:0x8020:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76' as expected FID '0x200002342:0x8021:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77' as expected striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77 FID '0x200002342:0x8022:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77' as expected FID '0x200002342:0x8023:0x0' resolves to path 
'/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78' as expected striped dir -i0 -c2 -H crush /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78 FID '0x200002342:0x8024:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78' as expected FID '0x200002342:0x8025:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79' as expected striped dir -i0 -c2 -H crush2 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79 FID '0x200002342:0x8026:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79' as expected FID '0x200002342:0x8027:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80' as expected striped dir -i0 -c2 -H crush2 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80 FID '0x200002342:0x8028:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80' as expected FID '0x200002342:0x8029:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81' as expected striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81 FID '0x200002342:0x802a:0x0' resolves to path 
'/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81' as expected FID '0x200002342:0x802b:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82' as expected striped dir -i0 -c2 -H crush2 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82 FID '0x200002342:0x802c:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82' as expected FID '0x200002342:0x802d:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83' as expected striped dir -i0 -c2 -H all_char /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83 FID '0x200002342:0x802e:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83' as expected FID '0x200002342:0x802f:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84' as expected striped dir -i0 -c2 -H crush2 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84 FID '0x200002342:0x8030:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84' as expected FID '0x200002342:0x8031:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85' as expected striped dir -i0 
-c2 -H crush2 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85 FID '0x200002342:0x8032:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85' as expected FID '0x200002342:0x8033:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86' as expected striped dir -i0 -c2 -H crush2 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86 FID '0x200002342:0x8034:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86' as expected FID '0x200002342:0x8035:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87' as expected striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87 FID '0x200002342:0x8036:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87' as expected FID '0x200002342:0x8037:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88' as expected striped dir -i0 -c2 -H crush2 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88 FID '0x200002342:0x8038:0x0' resolves to path 
'/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88' as expected FID '0x200002342:0x8039:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89' as expected striped dir -i0 -c2 -H crush2 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89 FID '0x200002342:0x803a:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89' as expected FID '0x200002342:0x803b:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90' as expected striped dir -i0 -c2 -H all_char /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90 FID '0x200002342:0x803c:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90' as expected FID '0x200002342:0x803d:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91' as expected striped dir -i0 -c2 -H crush2 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91 FID '0x200002342:0x803e:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91' as expected FID '0x200002342:0x803f:0x0' resolves to path 
'/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92' as expected striped dir -i0 -c2 -H crush /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92 FID '0x200002342:0x8040:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92' as expected FID '0x200002342:0x8041:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93' as expected striped dir -i0 -c2 -H crush /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93 FID '0x200002342:0x8042:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93' as expected FID '0x200002342:0x8043:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94' as expected striped dir -i0 -c2 -H all_char /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94 FID '0x200002342:0x8044:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94' as expected FID '0x200002342:0x8045:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95' as expected striped dir -i0 -c2 -H crush2 
/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95 FID '0x200002342:0x8046:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95' as expected FID '0x200002342:0x8047:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96' as expected striped dir -i0 -c2 -H all_char /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96 FID '0x200002342:0x8048:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96' as expected FID '0x200002342:0x8049:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97' as expected striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97 FID '0x200002342:0x804a:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97' as expected FID '0x200002342:0x804b:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98' as expected striped dir -i0 -c2 -H fnv_1a_64 
/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98 FID '0x200002342:0x804c:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98' as expected FID '0x200002342:0x804d:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99' as expected striped dir -i0 -c2 -H all_char /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99 FID '0x200002342:0x804e:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99' as expected FID '0x200002342:0x804f:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99/100' as expected striped dir -i0 -c2 -H crush2 /mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99/100 FID '0x200002342:0x8050:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99/100' as expected FID '0x200002342:0x8051:0x0' resolves to path '/mnt/lustre/d162c.sanity.local/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99/100/101' as expected striped dir -i0 -c2 -H crush 
/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99/100/101 FID '0x200002342:0x8052:0x0' resolves to path '/mnt/lustre/d162c.sanity.remote/0/1/2/3/4/5/6/7/8/9/10/11/12/13/14/15/16/17/18/19/20/21/22/23/24/25/26/27/28/29/30/31/32/33/34/35/36/37/38/39/40/41/42/43/44/45/46/47/48/49/50/51/52/53/54/55/56/57/58/59/60/61/62/63/64/65/66/67/68/69/70/71/72/73/74/75/76/77/78/79/80/81/82/83/84/85/86/87/88/89/90/91/92/93/94/95/96/97/98/99/100/101' as expected PASS 162c (36s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 165a: ofd access log discovery ============ 22:48:58 (1713408538) obdfilter.lustre-OST0000.access_log_size=4096 - name: lustre-OST0000 version: 0x10000 type: 0x1 log_size: 4096 entry_size: 64 Stopping /mnt/lustre-ost1 (opts:) on oleg146-server pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-OST0000 PASS 165a (16s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 165b: ofd access log entries are produced and consumed ========================================================== 22:49:17 (1713408557) obdfilter.lustre-OST0000.access_log_size=4096 - name: lustre-OST0000 version: 0x10000 type: 0x1 log_size: 4096 entry_size: 64 entry = '- TRACE alr_log_entry lustre-OST0000 [0x200002342:0x8053:0x0] 0 1048576 1713408563 1048576 1 w' entry = '- TRACE alr_log_entry lustre-OST0000 [0x200002342:0x8053:0x0] 0 524288 1713408573 524288 1 r' pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Stopping /mnt/lustre-ost1 (opts:) on oleg146-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-OST0000 PASS 165b (30s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 165c: full ofd access logs do not block IOs ========================================================== 22:49:50 (1713408590) striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d165c.sanity obdfilter.lustre-OST0000.access_log_size=4096 - unlinked 0 (time 1713408600 ; total 0 ; last 0) total: 128 unlinks in 0 seconds: inf unlinks/second pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Stopping /mnt/lustre-ost1 (opts:) on oleg146-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-OST0000 PASS 165c (19s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 165d: ofd_access_log mask works =========== 22:50:11 (1713408611) striped dir -i1 -c2 -H all_char /mnt/lustre/d165d.sanity obdfilter.lustre-OST0000.access_log_size=4096 
obdfilter.lustre-OST0000.access_log_mask=rw obdfilter.lustre-OST0000.access_log_mask=r obdfilter.lustre-OST0000.access_log_mask=w obdfilter.lustre-OST0000.access_log_mask=0 pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Stopping /mnt/lustre-ost1 (opts:) on oleg146-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-OST0000 PASS 165d (30s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 165e: ofd_access_log MDT index filter works ========================================================== 22:50:44 (1713408644) striped dir -i0 -c1 -H crush2 /mnt/lustre/d165e.sanity-0 striped dir -i1 -c1 -H fnv_1a_64 /mnt/lustre/d165e.sanity-1 obdfilter.lustre-OST0000.access_log_size=4096 pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Stopping /mnt/lustre-ost1 (opts:) on oleg146-server Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-OST0000 PASS 165e (19s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 165f: ofd_access_log_reader --exit-on-close works ========================================================== 22:51:04 (1713408664) obdfilter.lustre-OST0000.access_log_size=4096 Stopping /mnt/lustre-ost1 (opts:) on oleg146-server pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-OST0000 PASS 165f (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 169: parallel read and truncate should not deadlock ========================================================== 22:51:22 (1713408682) creating a 10 Mb file starting reads truncating the file 2560+0 records in 2560+0 records out 10485760 bytes (10 MB) copied, 0.318696 s, 32.9 MB/s killing dd wait until dd is finished removing the temporary file PASS 169 (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 170: test lctl df to handle corrupted log =============================================================================== 22:51:40 (1713408700) PASS 170 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 171: test libcfs_debug_dumplog_thread stuck in do_exit() ================================================================ 22:51:45 (1713408705) fail_loc=0x50e fail_val=3000 multiop /mnt/lustre/f171.sanity vO_s TMPPIPE=/tmp/multiop_open_wait_pipe.7509 fail_loc=0 PASS 171 (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 172: manual device removal with lctl cleanup/detach ================================================================ 22:51:53 (1713408713) fail_loc=0x60e Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre PASS 172 (4s) debug_raw_pointers=0 
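For reference, tests 165a-165f above drive the OFD access-log feature entirely through the tunables visible in this log (obdfilter.*.access_log_size, obdfilter.*.access_log_mask) plus the userspace ofd_access_log_reader utility. A minimal sketch of the same sequence, run on the OSS: the target name lustre-OST0000 is the one from this setup, and nothing beyond the --exit-on-close option named by test 165f itself is assumed about the reader's options.

    # size the per-OST access log, as test 165a does before restarting the OST
    lctl set_param obdfilter.lustre-OST0000.access_log_size=4096
    # limit which accesses get logged: reads (r), writes (w), both (rw),
    # or nothing (0), as test 165d cycles through
    lctl set_param obdfilter.lustre-OST0000.access_log_mask=rw
    # consume entries from userspace; --exit-on-close makes the reader quit
    # when the log device is closed, e.g. on OST stop (test 165f)
    ofd_access_log_reader --exit-on-close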
debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 180a: test obdecho on osc ================= 22:51:59 (1713408719) SKIP: sanity test_180a obdecho on osc is no longer supported SKIP 180a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 180b: test obdecho directly on obdfilter == 22:52:03 (1713408723) oleg146-server: oleg146-server.virtnet: executing load_module obdecho/obdecho New object id is 0x2 valid: 0x1100000000007bf atime: 0 mtime: 0 ctime: 0 size: 0 blocks: 0 mode: 0107666 uid: 0 gid: 0 projid: 0 data_version: 0 Print status every operation test_brw: writing 10x64 pages (obj 0x2, off 0): Wed Apr 17 22:52:06 2024 test_brw: write number 1 @ 2:0 for 262144 test_brw: write number 2 @ 2:262144 for 262144 test_brw: write number 3 @ 2:524288 for 262144 test_brw: write number 4 @ 2:786432 for 262144 test_brw: write number 5 @ 2:1048576 for 262144 test_brw: write number 6 @ 2:1310720 for 262144 test_brw: write number 7 @ 2:1572864 for 262144 test_brw: write number 8 @ 2:1835008 for 262144 test_brw: write number 9 @ 2:2097152 for 262144 test_brw: write number 10 @ 2:2359296 for 262144 test_brw: wrote 10x64 pages in 0.013s (194.175 MB/s): Wed Apr 17 22:52:06 2024 destroy: 1 objects destroy: #1 is object id 0x2 PASS 180b (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 180c: test huge bulk I/O size on obdfilter, don't LASSERT ========================================================== 22:52:12 (1713408732) oleg146-server: oleg146-server.virtnet: executing load_module obdecho/obdecho New object id is 0x3 valid: 0x1100000000007bf atime: 0 mtime: 0 ctime: 0 size: 0 blocks: 0 mode: 0107666 uid: 0 gid: 0 projid: 0 data_version: 0 Print status every operation test_brw: writing 10x16384 pages (obj 0x3, off 0): Wed Apr 17 22:52:16 2024 test_brw: write number 1 @ 3:0 for 67108864 test_brw: write number 2 @ 3:67108864 for 67108864 test_brw: write number 3 @ 3:134217728 for 67108864 test_brw: write number 4 @ 3:201326592 for 67108864 test_brw: write number 5 @ 3:268435456 for 67108864 test_brw: write number 6 @ 3:335544320 for 67108864 test_brw: write number 7 @ 3:402653184 for 67108864 test_brw: write number 8 @ 3:469762048 for 67108864 test_brw: write number 9 @ 3:536870912 for 67108864 test_brw: write number 10 @ 3:603979776 for 67108864 test_brw: wrote 10x16384 pages in 0.462s (1385.078 MB/s): Wed Apr 17 22:52:16 2024 destroy: 1 objects destroy: #1 is object id 0x3 PASS 180c (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 181: Test open-unlinked dir ================================================================================== 22:52:22 (1713408742) striped dir -i1 -c2 -H all_char /mnt/lustre/d181.sanity - open/close 2317 (time 1713408753.94 total 10.00 last 231.68) total: 4000 open/close in 17.22 seconds: 232.31 ops/second --------------e------- . 
multiop /mnt/lustre/d181.sanity vD_Sc TMPPIPE=/tmp/multiop_open_wait_pipe.7509
- unlinked 0 (time 1713408762 ; total 0 ; last 0)
total: 4000 unlinks in 9 seconds: 444.444458 unlinks/second
stat: cannot stat '/mnt/lustre/d181.sanity': No such file or directory
PASS 181 (31s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 182a: Test parallel modify metadata operations from mdc ========================================================== 22:52:56 (1713408776)
mdc.lustre-MDT0000-mdc-ffff8800a7bbc800.rpc_stats=clear
mdc.lustre-MDT0001-mdc-ffff8800a7bbc800.rpc_stats=clear
total: 1000 open/close in 1.90 seconds: 525.10 ops/second
total: 1000 open/close in 1.96 seconds: 510.20 ops/second
total: 1000 open/close in 1.97 seconds: 508.36 ops/second
total: 1000 open/close in 1.98 seconds: 505.90 ops/second
total: 1000 open/close in 2.00 seconds: 499.99 ops/second
total: 1000 open/close in 2.00 seconds: 500.68 ops/second
total: 1000 open/close in 2.04 seconds: 489.51 ops/second
total: 1000 open/close in 2.06 seconds: 485.42 ops/second
total: 1000 open/close in 2.10 seconds: 476.45 ops/second
total: 1000 open/close in 2.11 seconds: 474.26 ops/second
- unlinked 0 (time 1713408781 ; total 0 ; last 0)
total: 1000 unlinks in 1 seconds: 1000.000000 unlinks/second
- unlinked 0 (time 1713408781 ; total 0 ; last 0)
total: 1000 unlinks in 1 seconds: 1000.000000 unlinks/second
- unlinked 0 (time 1713408781 ; total 0 ; last 0)
total: 1000 unlinks in 1 seconds: 1000.000000 unlinks/second
- unlinked 0 (time 1713408781 ; total 0 ; last 0)
total: 1000 unlinks in 1 seconds: 1000.000000 unlinks/second
- unlinked 0 (time 1713408781 ; total 0 ; last 0)
total: 1000 unlinks in 1 seconds: 1000.000000 unlinks/second
- unlinked 0 (time 1713408781 ; total 0 ; last 0)
total: 1000 unlinks in 1 seconds: 1000.000000 unlinks/second
- unlinked 0 (time 1713408781 ; total 0 ; last 0)
total: 1000 unlinks in 1 seconds: 1000.000000 unlinks/second
- unlinked 0 (time 1713408781 ; total 0 ; last 0)
total: 1000 unlinks in 1 seconds: 1000.000000 unlinks/second
- unlinked 0 (time 1713408781 ; total 0 ; last 0)
total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second
- unlinked 0 (time 1713408781 ; total 0 ; last 0)
total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second
mdc.lustre-MDT0000-mdc-ffff8800a7bbc800.rpc_stats=
snapshot_time:         1713408783.542750092 secs.nsecs
start_time:            1713408776.895228020 secs.nsecs
elapsed_time:          6.647522072 secs.nsecs
modify_RPCs_in_flight: 0

                        modify
rpcs in flight      rpcs %% cum %%
0:                     0  0      0
1:                   278  1      1
2:                   441  2      4
3:                   999  6     11
4:                  3986 26     38
5:                  9301 61    100

read RPCs in flight:  0
write RPCs in flight: 0
pending write pages:  0
pending read pages:   0

                        read                    write
pages per rpc       rpcs  % cum % |         rpcs  % cum %
1:                     0  0     0 |            0  0     0

                        read                    write
rpcs in flight      rpcs  % cum % |         rpcs  % cum %
1:                     0  0     0 |            0  0     0

                        read                    write
offset              rpcs  % cum % |         rpcs  % cum %
0:                     0  0     0 |            0  0     0

mdc.lustre-MDT0001-mdc-ffff8800a7bbc800.rpc_stats=
snapshot_time:         1713408783.542893105 secs.nsecs
start_time:            1713408776.895690200 secs.nsecs
elapsed_time:          6.647202905 secs.nsecs
modify_RPCs_in_flight: 0

                        modify
rpcs in flight      rpcs %% cum %%
0:                     0  0      0
1:                   199  1      1
2:                   431  2      4
3:                   787  5      9
4:                  4023 26     36
5:                  9565 63    100

read RPCs in flight:  0
write RPCs in flight: 0
pending write pages:  0
pending read pages:   0

                        read                    write
pages per rpc       rpcs  % cum % |         rpcs  % cum %
1:                     0  0     0 |            0  0     0

                        read                    write
rpcs in flight      rpcs  % cum % |         rpcs  % cum %
1:                     0  0     0 |            0  0     0

                        read                    write
offset              rpcs  % cum % |         rpcs  % cum %
0:                     0  0     0 |            0  0     0

PASS 182a (10s)
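The rpc_stats dump that test 182a inspects is the standard mdc statistics file, and the "modify rpcs in flight" histogram is the part that matters here: on each MDT, roughly 61-63% of modify RPCs were sent while five others were already outstanding, which is what demonstrates that the client really issues modify metadata operations in parallel. A minimal sketch of collecting the same numbers by hand, assuming only the lctl parameters shown above:

    # reset the counters before the workload, exactly as the test does
    lctl set_param mdc.*.rpc_stats=clear
    # ... run a metadata-heavy workload (createmany/unlinkmany here) ...
    # dump the histograms; the 'modify rpcs in flight' rows show how many
    # modify RPCs were already outstanding when each new one was sent
    lctl get_param mdc.*.rpc_stats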
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 182b: Test parallel modify metadata operations from osp ========================================================== 22:53:09 (1713408789) osp.lustre-MDT0000-osp-MDT0001.rpc_stats osp.lustre-MDT0001-osp-MDT0000.rpc_stats total: 10 mkdir in 0.19 seconds: 52.66 ops/second total: 1000 mkdir in 3.26 seconds: 306.98 ops/second total: 1000 mkdir in 3.25 seconds: 307.94 ops/second total: 1000 mkdir in 3.34 seconds: 298.96 ops/second total: 1000 mkdir in 3.38 seconds: 295.52 ops/second total: 1000 mkdir in 3.37 seconds: 296.69 ops/second total: 1000 mkdir in 3.42 seconds: 292.57 ops/second total: 1000 mkdir in 3.40 seconds: 294.14 ops/second total: 1000 mkdir in 3.48 seconds: 287.15 ops/second total: 1000 mkdir in 3.51 seconds: 284.56 ops/second total: 1000 mkdir in 3.68 seconds: 271.55 ops/second Time for file creation 5 sec for 8 parallel RPCs rmdir(/mnt/lustre/d182b.sanity/0/d-0) error: No such file or directory total: 0 unlinks in 0 seconds: -nan unlinks/second rmdir(/mnt/lustre/d182b.sanity/3/d-0) error: No such file or directory total: 0 unlinks in 0 seconds: -nan unlinks/second rmdir(/mnt/lustre/d182b.sanity/2/d-0) error: No such file or directory total: 0 unlinks in 0 seconds: -nan unlinks/second rmdir(/mnt/lustre/d182b.sanity/6/d-0) error: No such file or directory total: 0 unlinks in 0 seconds: -nan unlinks/second rmdir(/mnt/lustre/d182b.sanity/7/d-0) error: No such file or directory total: 0 unlinks in 0 seconds: -nan unlinks/second rmdir(/mnt/lustre/d182b.sanity/4/d-0) error: No such file or directory total: 0 unlinks in 0 seconds: -nan unlinks/second rmdir(/mnt/lustre/d182b.sanity/9/d-0) error: No such file or directory total: 0 unlinks in 0 seconds: -nan unlinks/second rmdir(/mnt/lustre/d182b.sanity/1/d-0) error: No such file or directory total: 0 unlinks in 0 seconds: -nan unlinks/second rmdir(/mnt/lustre/d182b.sanity/8/d-0) error: No such file or directory total: 0 unlinks in 0 seconds: -nan unlinks/second rmdir(/mnt/lustre/d182b.sanity/5/d-0) error: No such file or directory total: 0 unlinks in 0 seconds: -nan unlinks/second Time for file removal 2 sec for 8 parallel RPCs osp.lustre-MDT0001-osp-MDT0000.max_mod_rpcs_in_flight=1 total: 10 mkdir in 0.19 seconds: 53.74 ops/second total: 1000 mkdir in 3.34 seconds: 299.14 ops/second total: 1000 mkdir in 3.35 seconds: 298.54 ops/second total: 1000 mkdir in 3.43 seconds: 291.45 ops/second total: 1000 mkdir in 3.38 seconds: 296.20 ops/second total: 1000 mkdir in 3.39 seconds: 294.99 ops/second total: 1000 mkdir in 3.49 seconds: 286.88 ops/second total: 1000 mkdir in 3.52 seconds: 283.76 ops/second total: 1000 mkdir in 3.53 seconds: 283.34 ops/second total: 1000 mkdir in 3.57 seconds: 279.82 ops/second total: 1000 mkdir in 3.60 seconds: 278.02 ops/second Time for file creation 5 sec for 1 RPC sent at a time - unlinked 0 (time 1713408986 ; total 0 ; last 0) total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second - unlinked 0 (time 1713408986 ; total 0 ; last 0) total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second - unlinked 0 (time 1713408986 ; total 0 ; last 0) total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second - unlinked 0 (time 1713408986 ; total 0 ; last 0) total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second - unlinked 0 (time 1713408986 ; total 0 ; last 0) total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second - unlinked 0 (time 1713408986 ; total 0 ; last 0) total: 1000 unlinks in 2 seconds: 500.000000 
unlinks/second - unlinked 0 (time 1713408986 ; total 0 ; last 0) total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second - unlinked 0 (time 1713408986 ; total 0 ; last 0) total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second - unlinked 0 (time 1713408986 ; total 0 ; last 0) total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second - unlinked 0 (time 1713408986 ; total 0 ; last 0) total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second Time for file removal 3 sec for 1 RPC sent at a time osp.lustre-MDT0001-osp-MDT0000.max_mod_rpcs_in_flight=8 PASS 182b (201s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 183: No crash or request leak in case of strange dispositions ================================================================== 22:56:33 (1713408993) fail_loc=0x148 ls: cannot open directory /mnt/lustre/d183.sanity: No such file or directory cat: /mnt/lustre/d183.sanity/f183.sanity: No such file or directory fail_loc=0 touch: cannot touch '/mnt/lustre/d183.sanity/f183.sanity': No such file or directory PASS 183 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 184a: Basic layout swap =================== 22:56:38 (1713408998) striped dir -i0 -c1 -H all_char /mnt/lustre/d184a.sanity/184a PASS 184a (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 184b: Forbidden layout swap (will generate errors) ========================================================== 22:56:45 (1713409005) lfs swap_layouts: error: cannot open '/mnt/lustre/d184b.sanity/184b/d1' for write: Is a directory (21) lfs swap_layouts: error: cannot open '/mnt/lustre/d184b.sanity/184b/d1' for write: Is a directory (21) running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [swap_layouts] [/mnt/lustre/d184b.sanity/184b/f1] [/mnt/lustre/d184b.sanity/184b/f2] lfs swap_layouts: error: cannot open '/mnt/lustre/d184b.sanity/184b/f1' for write: Permission denied (13) lfs swap_layouts: error: cannot swap layout between '/mnt/lustre/d184b.sanity/184b/f1' and '/mnt/lustre/d184b.sanity/184b/f3': Operation not permitted (1) PASS 184b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 184c: Concurrent write and layout swap ==== 22:56:50 (1713409010) 46+0 records in 46+0 records out 48234496 bytes (48 MB) copied, 1.41024 s, 34.2 MB/s 37+0 records in 37+0 records out 38797312 bytes (39 MB) copied, 1.19109 s, 32.6 MB/s ref file size: ref1(48234496), ref2(38797312) 2944+0 records in 2944+0 records out 48234496 bytes (48 MB) copied, 2.91843 s, 16.5 MB/s Copied 1572864 bytes before swapping layout... 
PASS 184c (12s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 184d: allow stripeless layouts swap ======= 22:57:04 (1713409024) Succeed in opening file "/mnt/lustre/d184d.sanity/f184d.sanity-2"(flags=O_CREAT) Succeed in opening file "/mnt/lustre/d184d.sanity/f184d.sanity-3"(flags=O_CREAT) -c 1 -S 4194304 -L raid0 -i 0 -c 1 -S 4194304 -L raid0 -i 0 /mnt/lustre/d184d.sanity/f184d.sanity-1: trusted.lov: No such attribute PASS 184d (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 184e: Recreate layout after stripeless layout swaps ========================================================== 22:57:11 (1713409031) Succeed in opening file "/mnt/lustre/d184e.sanity/f184e.sanity-2"(flags=O_CREAT) Succeed in opening file "/mnt/lustre/d184e.sanity/f184e.sanity-3"(flags=O_CREAT) /mnt/lustre/d184e.sanity/f184e.sanity-1: trusted.lov: No such attribute PASS 184e (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 184f: IOC_MDC_GETFILEINFO for files with long names but no striping ========================================================== 22:57:16 (1713409036) error: bad stripe_count '0x6666' PASS 184f (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 185: Volatile file support ================ 22:57:19 (1713409039) Can't lstat /mnt/lustre/.lustre/fid/[0x240001b74:0x1b71:0x0]: No such file or directory multiop /mnt/lustre/d185.sanity vVw4096_c TMPPIPE=/tmp/multiop_open_wait_pipe.7509 /mnt/lustre/.lustre/fid/[0x240001b74:0x1b72:0x0] has type file OK PASS 185 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 185a: Volatile file creation in .lustre/fid/ ========================================================== 22:57:24 (1713409044) /mnt/lustre/.lustre/fid/[0x200002b11:0x69a3:0x0] has type file OK Can't lstat /mnt/lustre/.lustre/fid/[0x200002b11:0x69a3:0x0]: No such file or directory /mnt/lustre/.lustre/fid/[0x240001b74:0x1b73:0x0] has type file OK Can't lstat /mnt/lustre/.lustre/fid/[0x240001b74:0x1b73:0x0]: No such file or directory PASS 185a (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 187a: Test data version change ============ 22:57:33 (1713409053) 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.366339 s, 28.6 MB/s 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0881888 s, 11.9 MB/s PASS 187a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 187b: Test data version change on volatile file ========================================================== 22:57:38 (1713409058) PASS 187b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 200: OST pools ============================ 22:57:43 (1713409063) Creating new pool oleg146-server: Pool lustre.cea1 created Adding targets to pool oleg146-server: OST lustre-OST0000_UUID added to pool lustre.cea1 Waiting 90s for 'lustre-OST0000_UUID ' Setting pool on directory /mnt/lustre/d200.pools/dir_tst Checking pool on directory /mnt/lustre/d200.pools/dir_tst Checking pool on directory /mnt/lustre/d200.pools/dir_tst/subdir Testing relative path works well Setting pool on directory dir_tst Setting pool on directory ./dir_tst Setting pool on directory ../dir_tst Setting pool on directory ../dir_tst/dir_tst Checking 
files allocation from directory pool Creating files in pool Checking 'lfs df' output Creating files in a pool with relative pathname Removing first target from a pool Removing lustre-OST0000_UUID from cea1 oleg146-server: OST lustre-OST0000_UUID removed from pool lustre.cea1 pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Removing all targets from pool Destroying pool oleg146-server: Pool lustre.cea1 destroyed PASS 200 (19s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 204a: Print default stripe attributes ===== 22:58:04 (1713409084) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d204a.sanity PASS 204a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 204b: Print default stripe size and offset ========================================================== 22:58:09 (1713409089) striped dir -i0 -c2 -H crush /mnt/lustre/d204b.sanity PASS 204b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 204c: Print default stripe count and offset ========================================================== 22:58:14 (1713409094) striped dir -i0 -c2 -H all_char /mnt/lustre/d204c.sanity PASS 204c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 204d: Print default stripe count and size ========================================================== 22:58:19 (1713409099) striped dir -i0 -c2 -H crush2 /mnt/lustre/d204d.sanity PASS 204d (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 204e: Print raw stripe attributes ========= 22:58:24 (1713409104) striped dir -i0 -c2 -H crush2 /mnt/lustre/d204e.sanity PASS 204e (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 204f: Print raw stripe size and offset ==== 22:58:29 (1713409109) striped dir -i0 -c2 -H all_char /mnt/lustre/d204f.sanity PASS 204f (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 204g: Print raw stripe count and offset === 22:58:33 (1713409113) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d204g.sanity PASS 204g (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 204h: Print raw stripe count and size ===== 22:58:38 (1713409118) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d204h.sanity PASS 204h (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 205a: Verify job stats ==================== 22:58:43 (1713409123) Setting lustre.sys.jobid_var from procname_uid to nodelocal Waiting 90s for 'nodelocal' Updated after 2s: want 'nodelocal' got 'nodelocal' mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl28 cl8' mdt.lustre-MDT0000.job_cleanup_interval=5 mdt.lustre-MDT0001.job_cleanup_interval=5 jobid_name=id.205a.%e.23354 Test: /home/green/git/lustre-release/lustre/utils/lfs mkdir -i 0 -c 1 /mnt/lustre/d205a.sanity Using JobID environment nodelocal=id.205a.lfs.23354 jobid_name=id.205a.%e.23475 Test: rmdir /mnt/lustre/d205a.sanity Using JobID environment nodelocal=id.205a.rmdir.23475 jobid_name=id.205a.%e.26763 Test: lfs mkdir -i 1 /mnt/lustre/d205a.sanity.remote Using JobID environment nodelocal=id.205a.lfs.26763 jobid_name=id.205a.%e.10929 Test: mknod 
/mnt/lustre/f205a.sanity c 1 3 Using JobID environment nodelocal=id.205a.mknod.10929 jobid_name=id.205a.%e.23221 Test: rm -f /mnt/lustre/f205a.sanity Using JobID environment nodelocal=id.205a.rm.23221 jobid_name=id.205a.%e.16781 Test: /home/green/git/lustre-release/lustre/utils/lfs setstripe -i 0 -c 1 /mnt/lustre/f205a.sanity Using JobID environment nodelocal=id.205a.lfs.16781 jobid_name=id.205a.%e.20032 Test: touch /mnt/lustre/f205a.sanity Using JobID environment nodelocal=id.205a.touch.20032 jobid_name=id.205a.%e.25124 Test: dd if=/dev/zero of=/mnt/lustre/f205a.sanity bs=1M count=1 oflag=sync Using JobID environment nodelocal=id.205a.dd.25124 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.100447 s, 10.4 MB/s jobid_name=id.205a.%e.13487 Test: dd if=/mnt/lustre/f205a.sanity of=/dev/null bs=1M count=1 iflag=direct Using JobID environment nodelocal=id.205a.dd.13487 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0423564 s, 24.8 MB/s jobid_name=id.205a.%e.21452 Test: /home/green/git/lustre-release/lustre/tests/truncate /mnt/lustre/f205a.sanity 0 Using JobID environment nodelocal=id.205a.truncate.21452 jobid_name=id.205a.%e.8036 Test: mv -f /mnt/lustre/f205a.sanity /mnt/lustre/d205a.sanity.rename Using JobID environment nodelocal=id.205a.mv.8036 jobid_name=id.205a.%e.5861 Test: /home/green/git/lustre-release/lustre/utils/lfs mkdir -i 0 -c 1 /mnt/lustre/d205a.sanity.expire Using JobID environment nodelocal=id.205a.lfs.5861 lustre-MDT0000.500097796 12LYOUT 02:58:58.472876835 2024.04.18 0x0 t=[0x200002b11:0x69b9:0x0] j=id.205a.lfs.16781 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x200000007:0x1:0x0] lustre-MDT0000.500097797 11CLOSE 02:58:58.477069637 2024.04.18 0x2 t=[0x200002b11:0x69b9:0x0] j=id.205a.lfs.16781 ef=0xf u=0:0 nid=192.168.201.46@tcp lustre-MDT0000.500097798 11CLOSE 02:58:58.483436440 2024.04.18 0x42 t=[0x200002b11:0x69b9:0x0] j=id.205a.lfs.16781 ef=0xf u=0:0 nid=192.168.201.46@tcp lustre-MDT0000.500097799 11CLOSE 02:59:00.129304015 2024.04.18 0x42 t=[0x200002b11:0x69b9:0x0] j=id.205a.touch.20032 ef=0xf u=0:0 nid=192.168.201.46@tcp lustre-MDT0000.500097800 13TRUNC 02:59:02.119812341 2024.04.18 0xe t=[0x200002b11:0x69b9:0x0] j=id.205a.dd.25124 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x200000007:0x1:0x0] lustre-MDT0000.500097801 11CLOSE 02:59:02.225560452 2024.04.18 0x242 t=[0x200002b11:0x69b9:0x0] j=id.205a.dd.25124 ef=0xf u=0:0 nid=192.168.201.46@tcp lustre-MDT0000.500097802 13TRUNC 02:59:05.562650838 2024.04.18 0xe t=[0x200002b11:0x69b9:0x0] j=id.205a.truncate.21452 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x200000007:0x1:0x0] lustre-MDT0000.500097803 08RENME 02:59:07.551409353 2024.04.18 0x0 t=[0:0x0:0x0] j=id.205a.mv.8036 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x200000007:0x1:0x0] d205a.sanity.rename s=[0x200002b11:0x69b9:0x0] sp=[0x200000007:0x1:0x0] f205a.sanity lustre-MDT0000.500097804 02MKDIR 02:59:09.220447272 2024.04.18 0x0 t=[0x200002b11:0x69bc:0x0] j=id.205a.lfs.5861 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x200000007:0x1:0x0] d205a.sanity.expire lustre-MDT0001.500000049 02MKDIR 02:58:53.566884355 2024.04.18 0x0 t=[0x240001b74:0x1b83:0x0] j=id.205a.lfs.26763 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x200000007:0x1:0x0] d205a.sanity.remote Setting lustre.sys.jobid_var from nodelocal to disable Waiting 90s for 'disable' Updated after 2s: want 'disable' got 'disable' lustre-MDT0000.500097793 05MKNOD 02:58:55.190069505 2024.04.18 0x0 t=[0x200002b11:0x69b8:0x0] j=id.205a.mknod.10929 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x200000007:0x1:0x0] f205a.sanity 
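The j= field in each changelog record above and below is the JobID, which is what 205a verifies: every operation must be tagged with the job name that was in force when it ran. A minimal sketch of the knobs the test drives, assuming lctl access on a client (the name template is illustrative):

    # choose the JobID source: procname_uid, nodelocal, session, or an environment variable
    lctl set_param jobid_var=nodelocal
    # with nodelocal the JobID is built from jobid_name; %e expands to the executable, %u to the uid
    lctl set_param jobid_name=id.%e.%u
    # per-JobID RPC statistics accumulate on the servers
    lctl get_param mdt.*.job_stats          # metadata operations, on the MDTs
    lctl get_param obdfilter.*.job_stats    # I/O operations, on the OSTs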
lustre-MDT0000.500097794 06UNLNK 02:58:56.813003117 2024.04.18 0x1 t=[0x200002b11:0x69b8:0x0] j=id.205a.rm.23221 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x200000007:0x1:0x0] f205a.sanity lustre-MDT0000.500097795 01CREAT 02:58:58.463473315 2024.04.18 0x0 t=[0x200002b11:0x69b9:0x0] j=id.205a.lfs.16781 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x200000007:0x1:0x0] f205a.sanity lustre-MDT0000.500097803 08RENME 02:59:07.551409353 2024.04.18 0x0 t=[0:0x0:0x0] j=id.205a.mv.8036 ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x200000007:0x1:0x0] d205a.sanity.rename s=[0x200002b11:0x69b9:0x0] sp=[0x200000007:0x1:0x0] f205a.sanity lustre-MDT0000.500097805 01CREAT 02:59:12.705982363 2024.04.18 0x0 t=[0x200002b11:0x69bd:0x0] ef=0xf u=0:0 nid=192.168.201.46@tcp p=[0x200000007:0x1:0x0] f205a.sanity jobid_var=USER jobid_name=S.%j.%e.%u.%h.E Test: touch /mnt/lustre/f205a.sanity Using JobID environment USER=S.root.touch.0.oleg146-client.v jobid_var=USER jobid_name=S.%j.%e.%u.%H.E Test: touch /mnt/lustre/f205a.sanity Using JobID environment USER=S.root.touch.0.oleg146-client.E jobid_var=session jobid_name=S.%j.%e.%u.%h.E jobid_this_session=root Test: touch /mnt/lustre/f205a.sanity Using JobID environment session=S.root.touch.0.oleg146-client.v mdt.lustre-MDT0000.job_cleanup_interval=600 mdt.lustre-MDT0001.job_cleanup_interval=600 jobid_name=%e.%u lustre-MDT0001: clear the changelog for cl28 of all records lustre-MDT0001: Deregistered changelog user #28 lustre-MDT0000: clear the changelog for cl8 of all records lustre-MDT0000: Deregistered changelog user #8 Setting lustre.sys.jobid_var from session to procname_uid Waiting 90s for 'procname_uid' Updated after 3s: want 'procname_uid' got 'procname_uid' PASS 205a (42s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 205b: Verify job stats jobid and output format ========================================================== 22:59:27 (1713409167) mdt.lustre-MDT0000.job_stats=clear mdt.lustre-MDT0001.job_stats=clear jobid_var=USER jobid_name=%j.%e.%u open: { samples: 1, unit: usecs, min: 1973, max: 1973, sum: 1973, sumsq: 3892729 } jobid_var=TEST205b mdt.lustre-MDT0000.job_stats="has\x20sp.touch.0" jobid_name=%e.%u jobid_var=procname_uid PASS 205b (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 205c: Verify client stats format ========== 22:59:34 (1713409174) llite.lustre-ffff8800a7bbc800.stats=0 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00757666 s, 541 kB/s llite.lustre-ffff8800a7bbc800.stats= snapshot_time 1713409175.096412865 secs.nsecs start_time 1713409175.072420212 secs.nsecs elapsed_time 0.023992653 secs.nsecs write_bytes 1 samples [bytes] 4096 4096 4096 16777216 write 1 samples [usecs] 3924 3924 3924 15397776 open 1 samples [usecs] 78 78 78 6084 close 1 samples [usecs] 3107 3107 3107 9653449 mknod 1 samples [usecs] 6050 6050 6050 36602500 inode_permission 3 samples [usecs] 5 279 452 106090 opencount 1 samples [reqs] 1 1 1 1 write_bytes 1 samples [bytes] 4096 4096 4096 16777216 PASS 205c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 205d: verify the format of some stats files ========================================================== 22:59:39 (1713409179) striped dir -i0 -c2 -H crush2 /mnt/lustre/d205d.sanity 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.331065 s, 31.7 MB/s rename_stats: - snapshot_time: 1713409180.568938251 - start_time: 1713408359.525897376 - 
elapsed_time: 821.043040875 - same_dir: 4KB: { sample: 1, pct: 50, cum_pct: 50 } 8KB: { sample: 0, pct: 0, cum_pct: 50 } 16KB: { sample: 0, pct: 0, cum_pct: 50 } 32KB: { sample: 0, pct: 0, cum_pct: 50 } 64KB: { sample: 0, pct: 0, cum_pct: 50 } 128KB: { sample: 0, pct: 0, cum_pct: 50 } 256KB: { sample: 1, pct: 50, cum_pct: 100 } - crossdir_src: 4KB: { sample: 2, pct: 100, cum_pct: 100 } - crossdir_tgt: 4KB: { sample: 2, pct: 100, cum_pct: 100 } verify rename_stats... OK verify mdt job_stats... OK verify ost job_stats... OK PASS 205d (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 205e: verify the output of lljobstat ====== 22:59:46 (1713409186) jobid_var=nodelocal jobid_name=205e.%e.%u 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.73419 s, 14.3 MB/s mdt.lustre-MDT0000.job_stats= job_stats: - job_id: .touch.0 snapshot_time: 1713409168.520388610 secs.nsecs start_time: 1713409168.495297719 secs.nsecs elapsed_time: 0.025090891 secs.nsecs open: { samples: 1, unit: usecs, min: 1973, max: 1973, sum: 1973, sumsq: 3892729 } close: { samples: 1, unit: usecs, min: 457, max: 457, sum: 457, sumsq: 208849 } mknod: { samples: 1, unit: usecs, min: 1452, max: 1452, sum: 1452, sumsq: 2108304 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 1, unit: usecs, min: 100, max: 100, sum: 100, sumsq: 10000 } setattr: { samples: 1, unit: usecs, min: 639, max: 639, sum: 639, sumsq: 408321 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: root.lfs.0 snapshot_time: 1713409169.215354546 secs.nsecs start_time: 1713409169.204357531 secs.nsecs elapsed_time: 0.010997015 secs.nsecs open: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } close: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 1, unit: usecs, min: 1980, max: 1980, sum: 1980, sumsq: 3920400 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } 
rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 1, unit: usecs, min: 99, max: 99, sum: 99, sumsq: 9801 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: dd.0 snapshot_time: 1713409174.753470426 secs.nsecs start_time: 1713409174.744421872 secs.nsecs elapsed_time: 0.009048554 secs.nsecs open: { samples: 1, unit: usecs, min: 2023, max: 2023, sum: 2023, sumsq: 4092529 } close: { samples: 1, unit: usecs, min: 479, max: 479, sum: 479, sumsq: 229441 } mknod: { samples: 1, unit: usecs, min: 1477, max: 1477, sum: 1477, sumsq: 2181529 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: stat.0 snapshot_time: 1713409179.779685803 secs.nsecs start_time: 1713409179.779669536 secs.nsecs elapsed_time: 0.000016267 secs.nsecs open: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, 
sumsq: 0 } close: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 1, unit: usecs, min: 93, max: 93, sum: 93, sumsq: 8649 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: lfs.0 snapshot_time: 1713409179.846941731 secs.nsecs start_time: 1713409179.798723317 secs.nsecs elapsed_time: 0.048218414 secs.nsecs open: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } close: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 1, unit: usecs, min: 8191, max: 8191, sum: 8191, sumsq: 67092481 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 5, unit: usecs, min: 78, max: 210, sum: 719, sumsq: 117573 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: 
{ samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: rm.0 snapshot_time: 1713409184.045840863 secs.nsecs start_time: 1713409183.930608326 secs.nsecs elapsed_time: 0.115232537 secs.nsecs open: { samples: 2, unit: usecs, min: 367, max: 477, sum: 844, sumsq: 362218 } close: { samples: 2, unit: usecs, min: 227, max: 296, sum: 523, sumsq: 139145 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 1, unit: usecs, min: 13364, max: 13364, sum: 13364, sumsq: 178596496 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: 205e.lfs.0 snapshot_time: 1713409187.203190151 secs.nsecs start_time: 1713409187.137167370 secs.nsecs elapsed_time: 0.066022781 secs.nsecs open: { samples: 2, unit: usecs, min: 1526, max: 4607, sum: 6133, sumsq: 23553125 } close: { samples: 2, unit: usecs, min: 315, max: 476, sum: 791, sumsq: 325801 } mknod: { samples: 1, unit: usecs, min: 1017, max: 1017, sum: 1017, sumsq: 1034289 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 1, unit: usecs, min: 1958, max: 1958, sum: 1958, sumsq: 3833764 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 4, unit: usecs, min: 76, max: 141, sum: 370, sumsq: 37362 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, 
sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: 205e.dd.0 snapshot_time: 1713409187.963158631 secs.nsecs start_time: 1713409187.211350518 secs.nsecs elapsed_time: 0.751808113 secs.nsecs open: { samples: 1, unit: usecs, min: 1041, max: 1041, sum: 1041, sumsq: 1083681 } close: { samples: 1, unit: usecs, min: 694, max: 694, sum: 694, sumsq: 481636 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 911, max: 911, sum: 911, sumsq: 829921 } getxattr: { samples: 1, unit: usecs, min: 163, max: 163, sum: 163, sumsq: 26569 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 1, unit: usecs, min: 1754, max: 1754, sum: 1754, sumsq: 3076516 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mdt.lustre-MDT0001.job_stats= job_stats: - job_id: lfs.0 snapshot_time: 1713409179.869196233 secs.nsecs start_time: 1713409179.838827800 secs.nsecs elapsed_time: 0.030368433 secs.nsecs open: { samples: 2, unit: usecs, min: 2646, max: 4578, sum: 7224, sumsq: 27959400 } close: { samples: 2, unit: usecs, min: 349, max: 386, sum: 735, sumsq: 270797 } mknod: { samples: 1, unit: usecs, min: 2092, max: 2092, sum: 2092, sumsq: 4376464 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 1, unit: usecs, min: 101, max: 
101, sum: 101, sumsq: 10201 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: dd.0 snapshot_time: 1713409180.228854102 secs.nsecs start_time: 1713409179.883979203 secs.nsecs elapsed_time: 0.344874899 secs.nsecs open: { samples: 1, unit: usecs, min: 1103, max: 1103, sum: 1103, sumsq: 1216609 } close: { samples: 1, unit: usecs, min: 453, max: 453, sum: 453, sumsq: 205209 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 3890, max: 3890, sum: 3890, sumsq: 15132100 } getxattr: { samples: 1, unit: usecs, min: 151, max: 151, sum: 151, sumsq: 22801 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 1, unit: usecs, min: 8531, max: 8531, sum: 8531, sumsq: 72777961 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 1, unit: bytes, min: 1048576, max: 1048576, sum: 1048576, sumsq: 1099511627776, hist: { 1M: 1 } } punch: { samples: 1, unit: usecs, min: 213, max: 213, sum: 213, sumsq: 45369 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: mv.0 snapshot_time: 1713409180.255758106 secs.nsecs start_time: 1713409180.242612855 secs.nsecs elapsed_time: 0.013145251 secs.nsecs open: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } close: { samples: 0, unit: usecs, 
min: 0, max: 0, sum: 0, sumsq: 0 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 1, unit: usecs, min: 4141, max: 4141, sum: 4141, sumsq: 17147881 } getattr: { samples: 1, unit: usecs, min: 97, max: 97, sum: 97, sumsq: 9409 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 1, unit: usecs, min: 4141, max: 4141, sum: 4141, sumsq: 17147881 } parallel_rename_file: { samples: 1, unit: usecs, min: 4141, max: 4141, sum: 4141, sumsq: 17147881 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: rm.0 snapshot_time: 1713409183.964557775 secs.nsecs start_time: 1713409183.927176107 secs.nsecs elapsed_time: 0.037381668 secs.nsecs open: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } close: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 1, unit: usecs, min: 4091, max: 4091, sum: 4091, sumsq: 16736281 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 2, unit: usecs, min: 81, max: 102, sum: 183, sumsq: 16965 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } 
punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } obdfilter.lustre-OST0000.job_stats= job_stats: - job_id: touch.0 snapshot_time: 1713409079.906957927 secs.nsecs start_time: 1713408700.464233054 secs.nsecs elapsed_time: 379.442724873 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 11, unit: usecs, min: 86, max: 1634, sum: 3016, sumsq: 2865462 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: cp.0 snapshot_time: 1713409015.680585846 secs.nsecs start_time: 1713408999.185355874 secs.nsecs elapsed_time: 16.495229972 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 2, unit: bytes, min: 771, max: 1716, sum: 2487, sumsq: 3539097, hist: { 1K: 1, 2K: 1 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 2, unit: usecs, min: 129, max: 143, sum: 272, sumsq: 37090 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 2, unit: usecs, min: 172, max: 2028, sum: 2200, sumsq: 4142368 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 1, unit: usecs, min: 75, max: 75, sum: 75, sumsq: 5625 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: lfs.0 snapshot_time: 1713409071.049849805 secs.nsecs start_time: 1713408999.407754527 secs.nsecs elapsed_time: 71.642095278 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 2, unit: usecs, min: 20613, max: 86826, sum: 107439, sumsq: 7963650045 } setattr: { samples: 5, unit: usecs, min: 131, max: 171, sum: 736, sumsq: 109640 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, 
sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 1, unit: usecs, min: 38, max: 38, sum: 38, sumsq: 1444 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: cmp.0 snapshot_time: 1713409019.658113194 secs.nsecs start_time: 1713408999.471347490 secs.nsecs elapsed_time: 20.186765704 secs.nsecs read_bytes: { samples: 3, unit: bytes, min: 4096, max: 1572864, sum: 1581056, sumsq: 2473934716928, hist: { 4K: 2, 2M: 1 } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 3, unit: usecs, min: 68, max: 5910, sum: 6073, sumsq: 34941749 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: cat.0 snapshot_time: 1713409000.116620542 secs.nsecs start_time: 1713409000.094279822 secs.nsecs elapsed_time: 0.022340720 secs.nsecs read_bytes: { samples: 2, unit: bytes, min: 4096, max: 4096, sum: 8192, sumsq: 33554432, hist: { 4K: 2 } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 2, unit: usecs, min: 72, max: 112, sum: 184, sumsq: 17728 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: chown.0 snapshot_time: 1713409005.546414820 secs.nsecs start_time: 1713409005.546372310 secs.nsecs elapsed_time: 0.000042510 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, 
min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 150, max: 150, sum: 150, sumsq: 22500 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: dd.0 snapshot_time: 1713409054.037091792 secs.nsecs start_time: 1713409014.575810802 secs.nsecs elapsed_time: 39.461280990 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 15, unit: bytes, min: 1048576, max: 4194304, sum: 51904512, sumsq: 202585017417728, hist: { 1M: 2, 2M: 2, 4M: 11 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 15, unit: usecs, min: 4492, max: 17012, sum: 149704, sumsq: 1617294336 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 166, max: 166, sum: 166, sumsq: 27556 } sync: { samples: 2, unit: usecs, min: 1325, max: 1446, sum: 2771, sumsq: 3846541 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: bash.0 snapshot_time: 1713409033.084800173 secs.nsecs start_time: 1713409033.058644798 secs.nsecs elapsed_time: 0.026155375 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 1, unit: bytes, min: 4, max: 4, sum: 4, sumsq: 16, hist: { 4: 1 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 1, unit: usecs, min: 147, max: 147, sum: 147, sumsq: 21609 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 175, max: 175, sum: 175, sumsq: 30625 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: id.205a.touch.20032 snapshot_time: 1713409140.126324601 secs.nsecs start_time: 1713409140.126310687 secs.nsecs elapsed_time: 0.000013914 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, 
sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 193, max: 193, sum: 193, sumsq: 37249 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: id.205a.dd.25124 snapshot_time: 1713409142.221761753 secs.nsecs start_time: 1713409142.125630980 secs.nsecs elapsed_time: 0.096130773 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 1, unit: bytes, min: 1048576, max: 1048576, sum: 1048576, sumsq: 1099511627776, hist: { 1M: 1 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 1, unit: usecs, min: 6195, max: 6195, sum: 6195, sumsq: 38378025 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 169, max: 169, sum: 169, sumsq: 28561 } sync: { samples: 1, unit: usecs, min: 1439, max: 1439, sum: 1439, sumsq: 2070721 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: id.205a.dd.13487 snapshot_time: 1713409143.934320243 secs.nsecs start_time: 1713409143.934297293 secs.nsecs elapsed_time: 0.000022950 secs.nsecs read_bytes: { samples: 1, unit: bytes, min: 1048576, max: 1048576, sum: 1048576, sumsq: 1099511627776, hist: { 1M: 1 } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 1, unit: usecs, min: 6182, max: 6182, sum: 6182, sumsq: 38217124 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, 
max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: id.205a.truncate.21452 snapshot_time: 1713409145.573920197 secs.nsecs start_time: 1713409145.573908397 secs.nsecs elapsed_time: 0.000011800 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 1365, max: 1365, sum: 1365, sumsq: 1863225 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: S.root.touch.0.oleg146-client.v snapshot_time: 1713409157.595835277 secs.nsecs start_time: 1713409154.257418849 secs.nsecs elapsed_time: 3.338416428 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 2, unit: usecs, min: 140, max: 142, sum: 282, sumsq: 39764 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: S.root.touch.0.oleg146-client.E snapshot_time: 1713409155.919783538 secs.nsecs start_time: 1713409155.919767615 secs.nsecs elapsed_time: 0.000015923 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 142, max: 142, sum: 142, sumsq: 20164 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 
0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: "has\x20sp.touch.0" snapshot_time: 1713409169.251214304 secs.nsecs start_time: 1713409169.251201278 secs.nsecs elapsed_time: 0.000013026 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 195, max: 195, sum: 195, sumsq: 38025 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: 205e.dd.0 snapshot_time: 1713409187.959988526 secs.nsecs start_time: 1713409187.228793342 secs.nsecs elapsed_time: 0.731195184 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 10, unit: bytes, min: 1048576, max: 1048576, sum: 10485760, sumsq: 10995116277760, hist: { 1M: 10 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 10, unit: usecs, min: 5295, max: 7575, sum: 58805, sumsq: 349614033 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 193, max: 193, sum: 193, sumsq: 37249 } sync: { samples: 10, unit: usecs, min: 926, max: 1348, sum: 12151, sumsq: 14887865 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } obdfilter.lustre-OST0001.job_stats= job_stats: - job_id: lfs.0 snapshot_time: 1713409071.052915806 secs.nsecs start_time: 1713406266.070680060 secs.nsecs elapsed_time: 2804.982235746 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 3, unit: usecs, min: 129, max: 236, sum: 499, 
sumsq: 90293 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 5, unit: usecs, min: 30, max: 36, sum: 162, sumsq: 5274 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: cp.0 snapshot_time: 1713409016.766604506 secs.nsecs start_time: 1713406667.642436403 secs.nsecs elapsed_time: 2349.124168103 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 143, unit: bytes, min: 159, max: 4194304, sum: 575759774, sumsq: 2411235321407362, hist: { 256: 2, 8K: 2, 128K: 1, 1M: 1, 4M: 137 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 143, unit: usecs, min: 149, max: 19934, sum: 1553677, sumsq: 18795505203 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 5, unit: usecs, min: 125, max: 250, sum: 899, sumsq: 173331 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: touch.0 snapshot_time: 1713409079.906825227 secs.nsecs start_time: 1713406680.742851819 secs.nsecs elapsed_time: 2399.163973408 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 22, unit: usecs, min: 131, max: 4983, sum: 13874, sumsq: 33607784 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: dd.0 snapshot_time: 1713409180.218097965 secs.nsecs start_time: 1713406768.467358528 secs.nsecs elapsed_time: 2411.750739437 secs.nsecs read_bytes: { samples: 4, unit: bytes, min: 4096, max: 4194304, sum: 10485760, sumsq: 39565272285184, hist: { 4K: 1, 2M: 1, 4M: 2 } } write_bytes: { samples: 
33, unit: bytes, min: 4096, max: 4194304, sum: 97566720, sumsq: 405075008684032, hist: { 4K: 1, 16K: 6, 32K: 2, 1M: 1, 4M: 23 } } read: { samples: 4, unit: usecs, min: 82, max: 10788, sum: 25372, sumsq: 225186670 } write: { samples: 33, unit: usecs, min: 82, max: 18009, sum: 251971, sumsq: 2727739153 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 5, unit: usecs, min: 125, max: 364, sum: 1155, sumsq: 300379 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: multiop.0 snapshot_time: 1713409059.042846440 secs.nsecs start_time: 1713406817.055545711 secs.nsecs elapsed_time: 2241.987300729 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 6, unit: bytes, min: 1000, max: 4194304, sum: 10492856, sumsq: 39582440377152, hist: { 1K: 1, 2K: 1, 4K: 1, 2M: 1, 4M: 2 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 6, unit: usecs, min: 148, max: 23410, sum: 60877, sumsq: 1244331363 } getattr: { samples: 2, unit: usecs, min: 26, max: 26, sum: 52, sumsq: 1352 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 9958, max: 9958, sum: 9958, sumsq: 99161764 } sync: { samples: 3, unit: usecs, min: 1030, max: 1322, sum: 3474, sumsq: 4067468 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: bash.0 snapshot_time: 1713409033.152121704 secs.nsecs start_time: 1713408999.566148516 secs.nsecs elapsed_time: 33.585973188 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 1, unit: bytes, min: 4, max: 4, sum: 4, sumsq: 16, hist: { 4: 1 } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 1, unit: usecs, min: 232, max: 232, sum: 232, sumsq: 53824 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 1, unit: usecs, min: 220, max: 220, sum: 220, sumsq: 48400 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, 
sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: cmp.0 snapshot_time: 1713409019.552152336 secs.nsecs start_time: 1713409019.552139133 secs.nsecs elapsed_time: 0.000013203 secs.nsecs read_bytes: { samples: 1, unit: bytes, min: 1572864, max: 1572864, sum: 1572864, sumsq: 2473901162496, hist: { 2M: 1 } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 1, unit: usecs, min: 7017, max: 7017, sum: 7017, sumsq: 49238289 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: .touch.0 snapshot_time: 1713409168.517271902 secs.nsecs start_time: 1713409168.517258125 secs.nsecs elapsed_time: 0.000013777 secs.nsecs read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 1504, max: 1504, sum: 1504, sumsq: 2262016 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mdt.lustre-MDT0000.job_stats= job_stats: - job_id: .touch.0 snapshot_time: 1713409168.520388610 secs.nsecs start_time: 1713409168.495297719 secs.nsecs elapsed_time: 0.025090891 secs.nsecs open: { samples: 1, unit: usecs, min: 1973, max: 1973, sum: 1973, sumsq: 3892729 } close: { samples: 1, unit: usecs, min: 457, max: 457, sum: 457, sumsq: 208849 } mknod: { samples: 1, unit: usecs, min: 1452, max: 1452, sum: 1452, sumsq: 2108304 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 1, unit: usecs, min: 100, max: 100, sum: 100, sumsq: 10000 } 
setattr: { samples: 1, unit: usecs, min: 639, max: 639, sum: 639, sumsq: 408321 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: root.lfs.0 snapshot_time: 1713409169.215354546 secs.nsecs start_time: 1713409169.204357531 secs.nsecs elapsed_time: 0.010997015 secs.nsecs open: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } close: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 1, unit: usecs, min: 1980, max: 1980, sum: 1980, sumsq: 3920400 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 1, unit: usecs, min: 99, max: 99, sum: 99, sumsq: 9801 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: dd.0 snapshot_time: 1713409174.753470426 secs.nsecs start_time: 1713409174.744421872 secs.nsecs elapsed_time: 0.009048554 secs.nsecs open: { samples: 1, unit: usecs, min: 2023, max: 2023, sum: 2023, sumsq: 4092529 } close: { samples: 1, unit: usecs, min: 479, max: 479, sum: 479, sumsq: 229441 } mknod: { samples: 1, unit: usecs, min: 1477, 
max: 1477, sum: 1477, sumsq: 2181529 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: stat.0 snapshot_time: 1713409179.779685803 secs.nsecs start_time: 1713409179.779669536 secs.nsecs elapsed_time: 0.000016267 secs.nsecs open: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } close: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 1, unit: usecs, min: 93, max: 93, sum: 93, sumsq: 8649 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } 
fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: lfs.0 snapshot_time: 1713409179.846941731 secs.nsecs start_time: 1713409179.798723317 secs.nsecs elapsed_time: 0.048218414 secs.nsecs open: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } close: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 1, unit: usecs, min: 8191, max: 8191, sum: 8191, sumsq: 67092481 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 5, unit: usecs, min: 78, max: 210, sum: 719, sumsq: 117573 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: rm.0 snapshot_time: 1713409184.045840863 secs.nsecs start_time: 1713409183.930608326 secs.nsecs elapsed_time: 0.115232537 secs.nsecs open: { samples: 2, unit: usecs, min: 367, max: 477, sum: 844, sumsq: 362218 } close: { samples: 2, unit: usecs, min: 227, max: 296, sum: 523, sumsq: 139145 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 1, unit: usecs, min: 13364, max: 13364, sum: 13364, sumsq: 178596496 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: 
{ samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: 205e.lfs.0 snapshot_time: 1713409187.203190151 secs.nsecs start_time: 1713409187.137167370 secs.nsecs elapsed_time: 0.066022781 secs.nsecs open: { samples: 2, unit: usecs, min: 1526, max: 4607, sum: 6133, sumsq: 23553125 } close: { samples: 2, unit: usecs, min: 315, max: 476, sum: 791, sumsq: 325801 } mknod: { samples: 1, unit: usecs, min: 1017, max: 1017, sum: 1017, sumsq: 1034289 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 1, unit: usecs, min: 1958, max: 1958, sum: 1958, sumsq: 3833764 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 4, unit: usecs, min: 76, max: 141, sum: 370, sumsq: 37362 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: 205e.dd.0 snapshot_time: 1713409187.963158631 secs.nsecs start_time: 1713409187.211350518 secs.nsecs elapsed_time: 0.751808113 secs.nsecs open: { samples: 1, unit: usecs, min: 1041, max: 1041, sum: 1041, sumsq: 1083681 } close: { samples: 1, unit: usecs, min: 694, max: 694, sum: 694, sumsq: 481636 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 911, max: 911, sum: 911, sumsq: 829921 } getxattr: { samples: 1, unit: usecs, min: 163, max: 163, sum: 163, sumsq: 26569 } setxattr: { samples: 0, unit: usecs, min: 
0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 1, unit: usecs, min: 1754, max: 1754, sum: 1754, sumsq: 3076516 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mdt.lustre-MDT0001.job_stats= job_stats: - job_id: lfs.0 snapshot_time: 1713409179.869196233 secs.nsecs start_time: 1713409179.838827800 secs.nsecs elapsed_time: 0.030368433 secs.nsecs open: { samples: 2, unit: usecs, min: 2646, max: 4578, sum: 7224, sumsq: 27959400 } close: { samples: 2, unit: usecs, min: 349, max: 386, sum: 735, sumsq: 270797 } mknod: { samples: 1, unit: usecs, min: 2092, max: 2092, sum: 2092, sumsq: 4376464 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 1, unit: usecs, min: 101, max: 101, sum: 101, sumsq: 10201 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: dd.0 snapshot_time: 1713409180.228854102 secs.nsecs start_time: 1713409179.883979203 secs.nsecs elapsed_time: 0.344874899 secs.nsecs open: { samples: 1, unit: usecs, min: 1103, max: 1103, sum: 1103, sumsq: 1216609 } close: { samples: 1, unit: usecs, min: 453, max: 453, sum: 453, sumsq: 205209 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, 
unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 3890, max: 3890, sum: 3890, sumsq: 15132100 } getxattr: { samples: 1, unit: usecs, min: 151, max: 151, sum: 151, sumsq: 22801 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 1, unit: usecs, min: 8531, max: 8531, sum: 8531, sumsq: 72777961 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 1, unit: bytes, min: 1048576, max: 1048576, sum: 1048576, sumsq: 1099511627776, hist: { 1M: 1 } } punch: { samples: 1, unit: usecs, min: 213, max: 213, sum: 213, sumsq: 45369 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: mv.0 snapshot_time: 1713409180.255758106 secs.nsecs start_time: 1713409180.242612855 secs.nsecs elapsed_time: 0.013145251 secs.nsecs open: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } close: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 1, unit: usecs, min: 4141, max: 4141, sum: 4141, sumsq: 17147881 } getattr: { samples: 1, unit: usecs, min: 97, max: 97, sum: 97, sumsq: 9409 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 1, unit: usecs, min: 4141, max: 4141, sum: 4141, sumsq: 17147881 } parallel_rename_file: { samples: 1, unit: usecs, min: 4141, max: 4141, sum: 4141, sumsq: 17147881 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 
0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } - job_id: rm.0 snapshot_time: 1713409183.964557775 secs.nsecs start_time: 1713409183.927176107 secs.nsecs elapsed_time: 0.037381668 secs.nsecs open: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } close: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } mknod: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } link: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } unlink: { samples: 1, unit: usecs, min: 4091, max: 4091, sum: 4091, sumsq: 16736281 } mkdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rmdir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 2, unit: usecs, min: 81, max: 102, sum: 183, sumsq: 16965 } setattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setxattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } samedir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_file: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } parallel_rename_dir: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } crossdir_rename: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } read_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } write_bytes: { samples: 0, unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } migrate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } fallocate: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 }
unit: bytes, min: 0, max: 0, sum: 0, sumsq: 0, hist: { } } read: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } write: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } getattr: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } setattr: { samples: 1, unit: usecs, min: 1504, max: 1504, sum: 1504, sumsq: 2262016 } punch: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } sync: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } destroy: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } create: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } statfs: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } get_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } set_info: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } quotactl: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } prealloc: { samples: 0, unit: usecs, min: 0, max: 0, sum: 0, sumsq: 0 } --- timestamp: 1713409189 top_jobs: - cp.0: {ops: 153, wr: 145, pu: 7, gi: 1} - dd.0: {ops: 69, op: 2, cl: 2, mn: 1, sa: 1, gx: 1, sy: 2, rd: 4, wr: 49, pu: 7} - touch.0: {ops: 33, sa: 33} - lfs.0: {ops: 28, op: 2, cl: 2, mn: 1, mk: 1, ga: 8, sa: 8, st: 6} - 205e.dd.0: {ops: 26, op: 1, cl: 1, sa: 1, gx: 1, sy: 11, wr: 10, pu: 1} - multiop.0: {ops: 12, ga: 2, sy: 3, wr: 6, pu: 1} - 205e.lfs.0: {ops: 10, op: 2, cl: 2, mn: 1, mk: 1, ga: 4} - rm.0: {ops: 8, op: 2, cl: 2, ul: 1, rm: 1, ga: 2} - .touch.0: {ops: 6, op: 1, cl: 1, mn: 1, ga: 1, sa: 2} - cmp.0: {ops: 4, rd: 4} - bash.0: {ops: 4, wr: 2, pu: 2} - id.205a.dd.25124: {ops: 3, sy: 1, wr: 1, pu: 1} - root.lfs.0: {ops: 2, mk: 1, ga: 1} - mv.0: {ops: 2, mv: 1, ga: 1} - cat.0: {ops: 2, rd: 2} - S.root.touch.0.oleg146-client.v: {ops: 2, sa: 2} - stat.0: {ops: 1, st: 1} - chown.0: {ops: 1, sa: 1} - id.205a.touch.20032: {ops: 1, sa: 1} - id.205a.dd.13487: {ops: 1, rd: 1} - id.205a.truncate.21452: {ops: 1, pu: 1} - S.root.touch.0.oleg146-client.E: {ops: 1, sa: 1} - has sp.touch.0: {ops: 1, sa: 1} ... 
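[editor's note] The dump above is the YAML that lctl returns for the obdfilter job_stats counters. A minimal sketch of summarizing such a dump from the shell; only the obdfilter.*.job_stats parameter name is taken from the output above, and the awk reduction (total write samples per job) is illustrative, not part of the test:

  lctl get_param -n obdfilter.*.job_stats |
  awk '/^- job_id:/ { job = $3 }
       /write_bytes:/ { if (match($0, /samples: [0-9]+/)) {
                            split(substr($0, RSTART, RLENGTH), f, ": ")
                            wr[job] += f[2] } }
       END { for (j in wr) print j, wr[j] }'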
jobid_name=%e.%u
jobid_var=procname_uid
PASS 205e (7s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 205f: verify qos_ost_weights YAML format == 22:59:55 (1713409195)
- { mdt_idx: 1, tgt_weight: 0, tgt_penalty: 0, tgt_penalty_per_obj: 0, tgt_avail: 0, tgt_last_used: 0, svr_nid: 192.168.201.146@tcp, svr_bavail: 0, svr_iavail: 0, svr_penalty: 0, svr_penalty_per_obj: 0, svr_last_used: 0 }
- { ost_idx: 0, tgt_weight: 0, tgt_penalty: 0, tgt_penalty_per_obj: 1460714, tgt_avail: 0, tgt_last_used: 0, svr_nid: 192.168.201.146@tcp, svr_bavail: 28794208, svr_iavail: 1, svr_penalty: 0, svr_penalty_per_obj: 731102, svr_last_used: 0 }
- { ost_idx: 1, tgt_weight: 0, tgt_penalty: 0, tgt_penalty_per_obj: 1463696, tgt_avail: 0, tgt_last_used: 0, svr_nid: 192.168.201.146@tcp, svr_bavail: 28794208, svr_iavail: 1, svr_penalty: 0, svr_penalty_per_obj: 731102, svr_last_used: 0 }
- { mdt_idx: 0, tgt_weight: 0, tgt_penalty: 0, tgt_penalty_per_obj: 0, tgt_avail: 0, tgt_last_used: 0, svr_nid: 192.168.201.146@tcp, svr_bavail: 0, svr_iavail: 0, svr_penalty: 0, svr_penalty_per_obj: 0, svr_last_used: 0 }
- { ost_idx: 0, tgt_weight: 0, tgt_penalty: 0, tgt_penalty_per_obj: 1460714, tgt_avail: 0, tgt_last_used: 0, svr_nid: 192.168.201.146@tcp, svr_bavail: 28794208, svr_iavail: 1, svr_penalty: 0, svr_penalty_per_obj: 731102, svr_last_used: 0 }
- { ost_idx: 1, tgt_weight: 0, tgt_penalty: 0, tgt_penalty_per_obj: 1463696, tgt_avail: 0, tgt_last_used: 0, svr_nid: 192.168.201.146@tcp, svr_bavail: 28794208, svr_iavail: 1, svr_penalty: 0, svr_penalty_per_obj: 731102, svr_last_used: 0 }
PASS 205f (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 205g: stress test for job_stats procfile == 23:00:00 (1713409200)
mdt.lustre-MDT0000.job_cleanup_interval=5
mdt.lustre-MDT0001.job_cleanup_interval=5
jobid_var=TEST205G_ID
jobid_name=%j.%p
mdt.lustre-MDT0000.job_stats=clear
mdt.lustre-MDT0001.job_stats=clear
/home/green/git/lustre-release/lustre/tests/test-framework.sh: line 4540: 15272 Terminated  while true; do printf $DIR/$tfile.{0001..1000} | xargs -P10 -n1 touch; done (wd: ~)
/home/green/git/lustre-release/lustre/tests/test-framework.sh: line 4540: 15273 Terminated  __test_205_jobstats_dump 4 (wd: ~)
jobid_name=%e.%u
jobid_var=procname_uid
mdt.lustre-MDT0000.job_cleanup_interval=600
mdt.lustre-MDT0001.job_cleanup_interval=600
PASS 205g (93s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 205h: check jobid xattr is stored correctly ========================================================== 23:01:35 (1713409295)
mdt.lustre-MDT0000.job_xattr=user.job
mdt.lustre-MDT0001.job_xattr=user.job
jobid_var=procname.uid
striped dir -i1 -c2 -H crush2 /mnt/lustre/d205h.sanity
getfattr: Removing leading '/' from absolute path names
getfattr: Removing leading '/' from absolute path names
mdt.lustre-MDT0000.job_xattr=NONE
mdt.lustre-MDT0001.job_xattr=NONE
mdt.lustre-MDT0000.job_xattr=trusted.job
mdt.lustre-MDT0001.job_xattr=trusted.job
getfattr: Removing leading '/' from absolute path names
jobid_var=procname_uid
mdt.lustre-MDT0000.job_xattr=user.job
mdt.lustre-MDT0001.job_xattr=user.job
PASS 205h (4s)
debug_raw_pointers=0
debug_raw_pointers=0
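[editor's note] Test 205h above checks that the MDT records the job ID of the creating process in an extended attribute on each new file. A minimal sketch of verifying that by hand, assuming a mounted client; the file name is hypothetical, while the parameter and xattr names are the ones shown in the log:

  lctl set_param mdt.*.job_xattr=user.job      # server side, as in the log
  touch /mnt/lustre/f205h.demo                 # hypothetical test file
  getfattr -n user.job /mnt/lustre/f205h.demo  # should print the creator's job ID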
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 205i: check job_xattr parameter accepts and rejects values correctly ========================================================== 23:01:41 (1713409301)
mdt.lustre-MDT0000.job_xattr=user.1234567
mdt.lustre-MDT0001.job_xattr=user.1234567
oleg146-server: error: set_param: setting /sys/fs/lustre/mdt/lustre-MDT0000/job_xattr=user.12345678: Invalid argument
oleg146-server: error: set_param: setting /sys/fs/lustre/mdt/lustre-MDT0001/job_xattr=user.12345678: Invalid argument
oleg146-server: error: set_param: setting 'mdt/*/job_xattr'='user.12345678': Invalid argument
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 22
oleg146-server: error: set_param: setting /sys/fs/lustre/mdt/lustre-MDT0000/job_xattr=userjob: Invalid argument
oleg146-server: error: set_param: setting /sys/fs/lustre/mdt/lustre-MDT0001/job_xattr=userjob: Invalid argument
oleg146-server: error: set_param: setting 'mdt/*/job_xattr'='userjob': Invalid argument
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 22
oleg146-server: error: set_param: setting /sys/fs/lustre/mdt/lustre-MDT0000/job_xattr=user.job/: Invalid argument
oleg146-server: error: set_param: setting /sys/fs/lustre/mdt/lustre-MDT0001/job_xattr=user.job/: Invalid argument
oleg146-server: error: set_param: setting 'mdt/*/job_xattr'='user.job/': Invalid argument
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 22
oleg146-server: error: set_param: setting /sys/fs/lustre/mdt/lustre-MDT0000/job_xattr=user.job€: Invalid argument
oleg146-server: error: set_param: setting /sys/fs/lustre/mdt/lustre-MDT0001/job_xattr=user.job€: Invalid argument
oleg146-server: error: set_param: setting 'mdt/*/job_xattr'='user.job€': Invalid argument
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 123
mdt.lustre-MDT0000.job_xattr=user.job
mdt.lustre-MDT0001.job_xattr=user.job
PASS 205i (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 206: fail lov_init_raid0() doesn't lbug === 23:01:47 (1713409307)
fail_loc=0xa0001403
fail_val=1
PASS 206 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 207a: can refresh layout at glimpse ======= 23:01:51 (1713409311)
5+0 records in
5+0 records out
20480 bytes (20 kB) copied, 0.0112578 s, 1.8 MB/s
fail_loc=0x170
PASS 207a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 207b: can refresh layout at open ========== 23:01:54 (1713409314)
5+0 records in
5+0 records out
20480 bytes (20 kB) copied, 0.00853327 s, 2.4 MB/s
fail_loc=0x171
checksum is daa100df6e6711906b61c9ab5aa16032 /mnt/lustre/f207b.sanity
PASS 207b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 208: Exclusive open ======================= 23:01:58 (1713409318)
==== test 1: verify get lease work
read lease(1) has applied.
==== test 2: verify lease can be broken by upcoming open
no lease applied.
==== test 3: verify lease can't be granted if an open already exists
multiop: cannot get READ lease, ext 0: Device or resource busy (16)
multiop: apply/unlock lease error: Device or resource busy
==== test 4: lease can sustain over recovery
Failing mds1 on oleg146-server
Stopping /mnt/lustre-mds1 (opts:) on oleg146-server
23:02:06 (1713409326) shut down
Failover mds1 to oleg146-server
mount facets: mds1
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-MDT0000
23:02:20 (1713409340) targets are mounted
23:02:20 (1713409340) facet_failover done
oleg146-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
read lease(1) has applied.
==== test 5: lease broken can't be regained by replay
Failing mds1 on oleg146-server
Stopping /mnt/lustre-mds1 (opts:) on oleg146-server
23:02:30 (1713409350) shut down
Failover mds1 to oleg146-server
mount facets: mds1
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-MDT0000
23:02:44 (1713409364) targets are mounted
23:02:44 (1713409364) facet_failover done
oleg146-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
no lease applied.
PASS 208 (54s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 209: read-only open/close requests should be freed promptly ========================================================== 23:02:54 (1713409374)
before: 23, after: 24
PASS 209 (11s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 210: lfs getstripe does not break leases == 23:03:07 (1713409387)
/mnt/lustre/f210.sanity
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 1
	obdidx		objid		objid		group
	     1		35554		0x8ae2		0x2c0000403
write lease(2) has applied.
/mnt/lustre/f210.sanity
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 1
	obdidx		objid		objid		group
	     1		35554		0x8ae2		0x2c0000403
read lease(1) has applied.
PASS 210 (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 211: failed mirror split doesn't break write lease ========================================================== 23:03:14 (1713409394)
10+0 records in
10+0 records out
40960 bytes (41 kB) copied, 0.0611403 s, 670 kB/s
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0122809 s, 334 kB/s
/mnt/lustre/f211.sanity
  lcm_layout_gen:    2
  lcm_mirror_count:  2
  lcm_entry_count:   2
    lcme_id:             65537
    lcme_mirror_id:      1
    lcme_flags:          init
    lcme_extent.e_start: 0
    lcme_extent.e_end:   EOF
      lmm_stripe_count:  1
      lmm_stripe_size:   4194304
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: 0
      lmm_objects:
      - 0: { l_ost_idx: 0, l_fid: [0x280000bd1:0x8723:0x0] }
    lcme_id:             131073
    lcme_mirror_id:      2
    lcme_flags:          init,stale
    lcme_extent.e_start: 0
    lcme_extent.e_end:   EOF
      lmm_stripe_count:  1
      lmm_stripe_size:   4194304
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: 1
      lmm_objects:
      - 0: { l_ost_idx: 1, l_fid: [0x2c0000403:0x8ae3:0x0] }
lfs mirror split: cannot destroy the last non-stale mirror of file '/mnt/lustre/f211.sanity'
write lease(2) has applied.
PASS 211 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 212: Sendfile test ====================================================================================================== 23:03:19 (1713409399)
3449+0 records in
3449+0 records out
3531776 bytes (3.5 MB) copied, 0.5656 s, 6.2 MB/s
PASS 212 (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 213: OSC lock completion and cancel race don't crash - bug 18829 ========================================================== 23:03:25 (1713409405)
4+0 records in
4+0 records out
16384 bytes (16 kB) copied, 0.0115406 s, 1.4 MB/s
fail_loc=0x8000040f
PASS 213 (13s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 214: hash-indexed directory test - bug 20133 ========================================================== 23:03:40 (1713409420)
total 20
drwxr-xr-x 2 root root 20480 Apr 17 23:03 d214c
a0 a1 a10 a100 a101 a102 a103 a104 a105 a106 a107 a108 a109 a11 a110 a111 a112 a113 a114 a115
a116 a117 a118 a119 a12 a120 a121 a122 a123 a124 a125 a126 a127 a128 a129 a13 a130 a131 a132 a133
a134 a135 a136 a137 a138 a139 a14 a140 a141 a142 a143 a144 a145 a146 a147 a148 a149 a15 a150 a151
a152 a153 a154 a155 a156 a157 a158 a159 a16 a160 a161 a162 a163 a164 a165 a166 a167 a168 a169 a17
a170 a171 a172 a173 a174 a175 a176 a177 a178 a179 a18 a180 a181 a182 a183 a184 a185 a186 a187 a188
a189 a19 a190 a191 a192 a193 a194 a195 a196 a197 a198 a199 a2 a20 a200 a201 a202 a203 a204 a205
a206 a207 a208 a209 a21 a210 a211 a212 a213 a214 a215 a216 a217 a218 a219 a22 a220 a221 a222 a223
a224 a225 a226 a227 a228 a229 a23 a230 a231 a232 a233 a234 a235 a236 a237 a238 a239 a24 a240 a241
a242 a243 a244 a245 a246 a247 a248 a249 a25 a250 a251 a252 a253 a254 a255 a256 a257 a258 a259 a26
a260 a261 a262 a263 a264 a265 a266 a267 a268 a269 a27 a270 a271 a272 a273 a274 a275 a276 a277 a278
a279 a28 a280 a281 a282 a283 a284 a285 a286 a287 a288 a289 a29 a290 a291 a292 a293 a294 a295 a296
a297 a298 a299 a3 a30 a300 a301 a302 a303 a304 a305 a306 a307 a308 a309 a31 a310 a311 a312 a313
a314 a315 a316 a317 a318 a319 a32 a320 a321 a322 a323 a324 a325 a326 a327 a328 a329 a33 a330 a331
a332 a333 a334 a335 a336 a337 a338 a339 a34 a35 a36 a37 a38 a39 a4 a40 a41 a42 a43 a44 a45 a46
a47 a48 a49 a5 a50 a51 a52 a53 a54 a55
a56 a57 a58 a59 a6 a60 a61 a62 a63 a64 a65 a66 a67 a68 a69 a7 a70 a71 a72 a73 a74 a75 a76 a77
a78 a79 a8 a80 a81 a82 a83 a84 a85 a86 a87 a88 a89 a9 a90 a91 a92 a93 a94 a95 a96 a97 a98 a99
PASS 214 (14s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 215: lnet exists and has proper content - bugs 18102, 21079, 21517 ========================================================== 23:03:56 (1713409436)
0 409 0 469958 469957 0 0 1154936320 955845536 0 0
PASS 215 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 216: check lockless direct write updates file size and kms correctly ========================================================== 23:04:01 (1713409441)
error: get_param: param_path 'osc/*/contention_seconds': No such file or directory
error: set_param: param_path 'osc/*/contention_seconds': No such file or directory
error: set_param: setting 'osc/*/contention_seconds'='60': No such file or directory
directio on /mnt/lustre/f216.sanity for 10x4096 bytes
PASS
/mnt/lustre/f216.sanity has size 40960 OK
error: set_param: param_path 'osc/*/contention_seconds': No such file or directory
error: set_param: setting 'osc/*/contention_seconds'='0': No such file or directory
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00353601 s, 0.0 kB/s
/mnt/lustre/f216.sanity has size 0 OK
error: set_param: setting : Invalid argument
PASS 216 (8s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 217: check lctl ping for hostnames with embedded hyphen ('-') ========================================================== 23:04:12 (1713409452)
node: 'oleg146-client.virtnet', nid: '192.168.201.46', node_ip='192.168.201.46'
lctl ping node oleg146-client.virtnet@tcp
12345-0@lo
192.168.201.46@tcp
node: 'oleg146-server', nid: '192.168.201.146', node_ip='192.168.201.146'
lctl ping node oleg146-server@tcp
12345-0@lo
192.168.201.146@tcp
PASS 217 (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 218: parallel read and truncate should not deadlock ========================================================== 23:04:17 (1713409457)
creating a 10 Mb file
starting reads
truncating the file
2560+0 records in
2560+0 records out
10485760 bytes (10 MB) copied, 0.291017 s, 36.0 MB/s
killing dd
wait until dd is finished
removing the temporary file
PASS 218 (17s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 219: LU-394: Write partial won't cause uncontiguous pages vec at LND ========================================================== 23:04:36 (1713409476)
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.0115378 s, 88.8 kB/s
fail_loc=0x411
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.0165704 s, 247 kB/s
fail_loc=0
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00166248 s, 2.5 MB/s
fail_loc=0x411
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000573987 s, 1.8 MB/s
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.0127748 s, 80.2 kB/s
/mnt/lustre/f219.sanity-2 has size 1024 OK
PASS 219 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 220: preallocated MDS objects still used if ENOSPC from OST ========================================================== 23:04:41 (1713409481)
UUID                  Inodes  IUsed    IFree IUse% Mounted on
lustre-MDT0000_UUID  1024000   4986  1019014    1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID  1024000    967  1023033    1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID   262144  11124   251020    5% /mnt/lustre[OST:0]
lustre-OST0001_UUID   262144  11060   251084    5% /mnt/lustre[OST:1]
filesystem_summary:   508057   5953   502104    2% /mnt/lustre
fail_val=-1
fail_loc=0x229
oleg146-server: Pool lustre.test_220 created
oleg146-server: OST lustre-OST0000_UUID added to pool lustre.test_220
preallocated objects on MDS is 15 (34785 - 34770)
OST still has 0 kbytes free
create 15 files @next_id...
total: 15 open/close in 0.14 seconds: 107.56 ops/second
after creation, last_id=34785, next_id=34785
UUID                  Inodes  IUsed    IFree IUse% Mounted on
lustre-MDT0000_UUID  1024000   4990  1019010    1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID  1024000    967  1023033    1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID    11124  11124        0  100% /mnt/lustre[OST:0]
lustre-OST0001_UUID    11060  11060        0  100% /mnt/lustre[OST:1]
filesystem_summary:     5957   5957        0  100% /mnt/lustre
cleanup...
fail_val=0
fail_loc=0
oleg146-server: OST lustre-OST0000_UUID removed from pool lustre.test_220
oleg146-server: Pool lustre.test_220 destroyed
unlink 15 files @34770...
- unlinked 0 (time 1713409494 ; total 0 ; last 0)
total: 15 unlinks in 0 seconds: inf unlinks/second
Destroy the created pools: test_220
PASS 220 (15s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 221: make sure fault and truncate race to not cause OOM ========================================================== 23:04:58 (1713409498)
121+1 records in
121+1 records out
62200 bytes (62 kB) copied, 1.12886 s, 55.1 kB/s
fail_loc=0x80001401
PASS 221 (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 222a: AGL for ls should not trigger CLIO lock failure ========================================================== 23:05:04 (1713409504)
striped dir -i0 -c2 -H crush /mnt/lustre/d222a.sanity
total: 10 open/close in 0.10 seconds: 101.62 ops/second
fail_loc=0x31a
fail_loc=0
PASS 222a (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 222b: AGL for rmdir should not trigger CLIO lock failure ========================================================== 23:05:10 (1713409510)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d222b.sanity
total: 10 open/close in 0.10 seconds: 102.95 ops/second
fail_loc=0x31a
fail_loc=0
PASS 222b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 223: osc reenqueue if without AGL lock granted ================================================================================= 23:05:15 (1713409515)
striped dir -i1 -c2 -H crush /mnt/lustre/d223.sanity
total: 10 open/close in 0.10 seconds: 99.96 ops/second
fail_loc=0x31b
fail_loc=0
PASS 223 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 224a: Don't panic on bulk IO failure ====== 23:05:20 (1713409520)
fail_loc=0x508
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 2.07839 s, 505 kB/s
fail_loc=0
Filesystem                  1K-blocks  Used Available Use% Mounted on
192.168.201.146@tcp:/lustre   7666232 16280   7196712   1% /mnt/lustre
PASS 224a (4s)
debug_raw_pointers=0
debug_raw_pointers=0
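[editor's note] Tests 224b and 224d below set at_max to 0 while a bulk-I/O delay is injected, then restore it; with adaptive timeouts disabled the injected delay produces a deterministic timeout instead of being absorbed. A rough sketch of the same pattern, with values taken from the log and a hypothetical file name:

  lctl set_param at_max=0                        # disable adaptive timeouts
  lctl set_param fail_val=3 fail_loc=0x80000515  # delay bulk I/O, as in test 224b
  dd if=/dev/zero of=/mnt/lustre/f224b.demo bs=1M count=1   # hypothetical target
  lctl set_param fail_loc=0                      # disarm the injection
  lctl set_param at_max=600                      # restore adaptive timeouts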
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 224b: Don't panic on bulk IO failure ====== 23:05:27 (1713409527)
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0181983 s, 57.6 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0486057 s, 21.6 MB/s
at_max=0
at_max=0
fail_val=3
fail_loc=0x80000515
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 3.0781 s, 341 kB/s
fail_loc=0
Filesystem                  1K-blocks  Used Available Use% Mounted on
192.168.201.146@tcp:/lustre   7666232 16280   7196712   1% /mnt/lustre
at_max=600
at_max=600
PASS 224b (8s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 224c: Don't hang if one of md lost during large bulk RPC ========================================================== 23:05:37 (1713409537)
Setting lustre.sys.at_max from 600 to 0
Waiting 90s for '0'
Updated after 2s: want '0' got '0'
Setting lustre.sys.timeout from 20 to 5
Waiting 90s for '5'
fail_loc=0x520
1+0 records in
1+0 records out
8000000 bytes (8.0 MB) copied, 0.199262 s, 40.1 MB/s
fail_loc=0
Setting lustre.sys.at_max from 0 to 600
Waiting 90s for '600'
Setting lustre.sys.timeout from 5 to 20
Waiting 90s for '20'
Updated after 2s: want '20' got '20'
PASS 224c (19s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 224d: Don't corrupt data on bulk IO timeout ========================================================== 23:05:58 (1713409558)
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0181734 s, 57.7 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0495487 s, 21.2 MB/s
at_max=0
at_max=0
fail_val=22
fail_loc=0x80000515
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 21.2935 s, 49.2 kB/s
fail_loc=0
Filesystem                  1K-blocks  Used Available Use% Mounted on
192.168.201.146@tcp:/lustre   7666232 18328   7195712   1% /mnt/lustre
at_max=600
at_max=600
PASS 224d (27s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity test_225a skipping excluded test 225a (base 225)
SKIP: sanity test_225b skipping excluded test 225b (base 225)
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 226a: call path2fid and fid2path on files of all type ========================================================== 23:06:29 (1713409589)
pass with /mnt/lustre/d226a.sanity/fifo and 0x240001b74:0x1b9d:0x0
pass with /mnt/lustre/d226a.sanity/null and 0x240001b74:0x1b9e:0x0
pass with /mnt/lustre/d226a.sanity/none and 0x240001b74:0x1b9f:0x0
pass with /mnt/lustre/d226a.sanity/dir and 0x200002b11:0x774c:0x0
pass with /mnt/lustre/d226a.sanity/loop0 and 0x240001b74:0x1ba0:0x0
pass with /mnt/lustre/d226a.sanity/file and 0x240001b74:0x1ba1:0x0
pass with /mnt/lustre/d226a.sanity/link and 0x240001b74:0x1ba2:0x0
pass with /mnt/lustre/d226a.sanity/sock and 0x240001b74:0x1ba3:0x0
PASS 226a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 226b: call path2fid and fid2path on files of all type under remote dir ========================================================== 23:06:34 (1713409594)
pass with /mnt/lustre/d226b.sanity/remote_dir/fifo and 0x240001b74:0x1ba6:0x0
pass with /mnt/lustre/d226b.sanity/remote_dir/null and 0x240001b74:0x1ba7:0x0
pass with /mnt/lustre/d226b.sanity/remote_dir/none and 0x240001b74:0x1ba8:0x0
pass with /mnt/lustre/d226b.sanity/remote_dir/dir and 0x240001b74:0x1ba9:0x0
pass with /mnt/lustre/d226b.sanity/remote_dir/loop0 and 0x240001b74:0x1baa:0x0
pass with /mnt/lustre/d226b.sanity/remote_dir/file and 0x240001b74:0x1bab:0x0
pass with /mnt/lustre/d226b.sanity/remote_dir/link and 0x240001b74:0x1bac:0x0
pass with /mnt/lustre/d226b.sanity/remote_dir/sock and 0x240001b74:0x1bad:0x0
PASS 226b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 226c: call path2fid and fid2path under remote dir with subdir mount ========================================================== 23:06:39 (1713409599)
PASS 226c (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 226d: verify fid2path with -n and -fn option ========================================================== 23:06:44 (1713409604)
PASS 226d (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 226e: Verify path2fid -0 option with newline and space ========================================================== 23:06:49 (1713409609)
PASS 226e (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 227: running truncated executable does not cause OOM ========================================================== 23:06:54 (1713409614)
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.00800097 s, 128 kB/s
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 21872: 1442 Segmentation fault      $MOUNT/date > /dev/null
PASS 227 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 228a: try to reuse idle OI blocks ========= 23:06:59 (1713409619)
fail_loc=0x80001002
- open/close 2384 (time 1713409630.81 total 10.00 last 238.39)
- open/close 4778 (time 1713409640.82 total 20.00 last 239.32)
- open/close 7173 (time 1713409650.82 total 30.01 last 239.44)
- open/close 9551 (time 1713409660.82 total 40.01 last 237.74)
total: 10000 open/close in 41.90 seconds: 238.68 ops/second
fail_loc=0
oleg146-server: debugfs 1.46.2.wc5 (26-Mar-2022)
oleg146-server: /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps
- unlinked 0 (time 1713409670 ; total 0 ; last 0)
total: 10000 unlinks in 23 seconds: 434.782623 unlinks/second
total: 2000 open/close in 8.76 seconds: 228.38 ops/second
oleg146-server: debugfs 1.46.2.wc5 (26-Mar-2022)
oleg146-server: /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps
PASS 228a (93s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 228b: idle OI blocks can be reused after MDT restart ========================================================== 23:08:34 (1713409714)
fail_loc=0x80001002
- open/close 2281 (time 1713409726.35 total 10.00 last 228.09)
- open/close 4701 (time 1713409736.35 total 20.00 last 241.92)
- open/close 7188 (time 1713409746.35 total 30.01 last 248.65)
- open/close 9563 (time 1713409756.35 total 40.01 last 237.47)
total: 10000 open/close in 41.86 seconds: 238.89 ops/second
fail_loc=0
oleg146-server: debugfs 1.46.2.wc5 (26-Mar-2022)
oleg146-server: /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps
- unlinked 0 (time 1713409766 ; total 0 ; last 0)
total: 10000 unlinks in 22 seconds: 454.545441 unlinks/second
Stopping /mnt/lustre-mds1 (opts:) on oleg146-server
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-MDT0000
total: 2000 open/close in 8.05 seconds: 248.33 ops/second
oleg146-server: debugfs 1.46.2.wc5 (26-Mar-2022)
oleg146-server: /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps
PASS 228b (102s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 228c: NOT shrink the last entry in OI index node to recycle idle leaf ========================================================== 23:10:18 (1713409818)
fail_loc=0x80001002
- open/close 2373 (time 1713409830.00 total 10.00 last 237.24)
- open/close 4753 (time 1713409840.00 total 20.00 last 238.00)
- open/close 7215 (time 1713409850.00 total 30.01 last 246.11)
- open/close 9599 (time 1713409860.01 total 40.01 last 238.31)
- open/close 12034 (time 1713409870.01 total 50.01 last 243.47)
- open/close 14376 (time 1713409880.01 total 60.01 last 234.20)
- open/close 16798 (time 1713409890.01 total 70.01 last 242.13)
- open/close 19199 (time 1713409900.01 total 80.02 last 240.06)
total: 20000 open/close in 83.47 seconds: 239.61 ops/second
fail_loc=0
oleg146-server: debugfs 1.46.2.wc5 (26-Mar-2022)
oleg146-server: /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps
- unlinked 0 (time 1713409911 ; total 0 ; last 0)
- unlinked 10000 (time 1713409934 ; total 23 ; last 23)
total: 20000 unlinks in 45 seconds: 444.444458 unlinks/second
total: 2000 open/close in 8.56 seconds: 233.67 ops/second
oleg146-server: debugfs 1.46.2.wc5 (26-Mar-2022)
oleg146-server: /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps
PASS 228c (157s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 229: getstripe/stat/rm/attr changes work on released files ========================================================== 23:12:57 (1713409977)
/mnt/lustre/f229.sanity
lmm_magic:         0x0BD10BD0
lmm_seq:           0x200003ab1
lmm_object_id:     0x2ee2
lmm_fid:           [0x200003ab1:0x2ee2:0x0]
lmm_stripe_count:  2
lmm_stripe_size:   4194304
lmm_pattern:       released
lmm_layout_gen:    0
lmm_stripe_offset: 0
  File: '/mnt/lustre/f229.sanity'
  Size: 0         	Blocks: 0          IO Block: 4194304 regular empty file
Device: 2c54f966h/743766374d	Inode: 144115440153538274  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2024-04-17 23:12:58.000000000 -0400
Modify: 2024-04-17 23:12:58.000000000 -0400
Change: 2024-04-17 23:12:58.000000000 -0400
 Birth: -
PASS 229 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230a: Create remote directory and files under the remote directory ========================================================== 23:13:02 (1713409982)
striped dir -i0 -c2 -H crush2 /mnt/lustre/d230a.sanity
striped dir -i0 -c1 -H crush /mnt/lustre/d230a.sanity/test_230_local
total: 10 open/close in 0.09 seconds: 110.36 ops/second
PASS 230a (3s)
debug_raw_pointers=0
debug_raw_pointers=0
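[editor's note] The 230-series tests that follow exercise directory migration between MDTs with lfs migrate -m. The basic invocation mirrors what test 230b does below; the directory name here is hypothetical:

  lfs mkdir -i 0 /mnt/lustre/demo_dir        # create the directory on MDT0000
  lfs migrate -m 1 /mnt/lustre/demo_dir      # move the tree to MDT0001
  lfs getdirstripe /mnt/lustre/demo_dir      # lmv_stripe_offset should now be 1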
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230b: migrate directory =================== 23:13:07 (1713409987)
striped dir -i0 -c2 -H crush /mnt/lustre/d230b.sanity
striped dir -i0 -c1 -H crush /mnt/lustre/d230b.sanity/migrate_dir
striped dir -i0 -c1 -H fnv_1a_64 /mnt/lustre/d230b.sanity/other_dir
total: 10 open/close in 0.10 seconds: 99.92 ops/second
total: 10 open/close in 0.09 seconds: 105.97 ops/second
total: 10 open/close in 0.10 seconds: 103.95 ops/second
total: 10 open/close in 0.10 seconds: 103.99 ops/second
total: 10 open/close in 0.10 seconds: 103.65 ops/second
total: 10 open/close in 0.10 seconds: 103.04 ops/second
total: 10 open/close in 0.09 seconds: 105.83 ops/second
total: 10 open/close in 0.09 seconds: 106.85 ops/second
total: 10 open/close in 0.10 seconds: 104.54 ops/second
total: 10 open/close in 0.10 seconds: 103.80 ops/second
migrate to MDT1, then checking..
lsattr: Operation not supported While reading flags on /mnt/lustre/d230b.sanity/migrate_dir/59char_ln
lsattr: Operation not supported While reading flags on /mnt/lustre/d230b.sanity/migrate_dir/4094char_ln
lsattr: Operation not supported While reading flags on /mnt/lustre/d230b.sanity/migrate_dir/f230b.sanity_ln
lsattr: Operation not supported While reading flags on /mnt/lustre/d230b.sanity/migrate_dir/60char_ln
lsattr: Operation not supported While reading flags on /mnt/lustre/d230b.sanity/migrate_dir/f230b.sanity_ln_other
lsattr: Operation not supported While reading flags on /mnt/lustre/d230b.sanity/migrate_dir/4095char_ln
lsattr: Operation not supported While reading flags on /mnt/lustre/d230b.sanity/migrate_dir/58char_ln
migrate back to MDT0, checking..
lfs getstripe: llapi_semantic_traverse: Failed to open '/mnt/lustre/d230b.sanity/migrate_dir/4095char_ln': File name too long (36)
lfs: getstripe for '/mnt/lustre/d230b.sanity/migrate_dir/4095char_ln' failed: File name too long
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 22222: [: -ne: unary operator expected
lfs getstripe: llapi_semantic_traverse: Failed to open '/mnt/lustre/d230b.sanity/migrate_dir/4094char_ln': File name too long (36)
lfs: getstripe for '/mnt/lustre/d230b.sanity/migrate_dir/4094char_ln' failed: File name too long
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 22222: [: -ne: unary operator expected
lsattr: Operation not supported While reading flags on /mnt/lustre/d230b.sanity/migrate_dir/f230b.sanity_ln
lsattr: Operation not supported While reading flags on /mnt/lustre/d230b.sanity/migrate_dir/60char_ln
lsattr: Operation not supported While reading flags on /mnt/lustre/d230b.sanity/migrate_dir/59char_ln
lsattr: Operation not supported While reading flags on /mnt/lustre/d230b.sanity/migrate_dir/58char_ln
lsattr: Operation not supported While reading flags on /mnt/lustre/d230b.sanity/migrate_dir/4095char_ln
lsattr: Operation not supported While reading flags on /mnt/lustre/d230b.sanity/migrate_dir/f230b.sanity_ln_other
lsattr: Operation not supported While reading flags on /mnt/lustre/d230b.sanity/migrate_dir/4094char_ln
PASS 230b (23s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230c: check directory accessibility if migration failed ========================================================== 23:13:33 (1713410013)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d230c.sanity
striped dir -i0 -c1 -H all_char /mnt/lustre/d230c.sanity/migrate_dir
striped dir -i1 -c1 -H fnv_1a_64 /mnt/lustre/d230c.sanity/remote_dir
  File: '/mnt/lustre/d230c.sanity/migrate_dir'
  Size: 4096      	Blocks: 8          IO Block: 1048576 directory
Device: 2c54f966h/743766374d	Inode: 144115440153538538  Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2024-04-17 23:13:33.000000000 -0400
Modify: 2024-04-17 23:13:33.000000000 -0400
Change: 2024-04-17 23:13:33.000000000 -0400
 Birth: -
total: 3 open/close in 0.03 seconds: 94.40 ops/second
fail_loc=0x1801
lfs migrate: /mnt/lustre/d230c.sanity/migrate_dir/f0 migrate failed: Input/output error (5)
Error in opening file "/mnt/lustre/d230c.sanity/migrate_dir/f0"(flags=O_CREAT) 17: File exists
mknod(S_IFREG|0644, 0): File exists
ln: failed to create hard link '/mnt/lustre/d230c.sanity/migrate_dir/f0': File exists
mv: cannot move '/mnt/lustre/d230c.sanity/remote_dir/f230c.sanity' to '/mnt/lustre/d230c.sanity/migrate_dir/f0': File exists
Error in opening file "/mnt/lustre/d230c.sanity/migrate_dir/f1"(flags=O_CREAT) 17: File exists
mknod(S_IFREG|0644, 0): File exists
ln: failed to create hard link '/mnt/lustre/d230c.sanity/migrate_dir/f1': File exists
mv: cannot move '/mnt/lustre/d230c.sanity/remote_dir/f230c.sanity' to '/mnt/lustre/d230c.sanity/migrate_dir/f1': File exists
Error in opening file "/mnt/lustre/d230c.sanity/migrate_dir/f2"(flags=O_CREAT) 17: File exists
mknod(S_IFREG|0644, 0): File exists
ln: failed to create hard link '/mnt/lustre/d230c.sanity/migrate_dir/f2': File exists
mv: cannot move '/mnt/lustre/d230c.sanity/remote_dir/f230c.sanity' to '/mnt/lustre/d230c.sanity/migrate_dir/f2': File exists
Error in opening file "/mnt/lustre/d230c.sanity/migrate_dir/file"(flags=O_CREAT) 17: File exists
mknod(S_IFREG|0644, 0): File exists
ln: failed to create hard link '/mnt/lustre/d230c.sanity/migrate_dir/file': File exists
lfs migrate: /mnt/lustre/d230c.sanity/migrate_dir migrate failed: Operation not permitted (1)
lfs migrate: /mnt/lustre/d230c.sanity/migrate_dir migrate failed: Operation not permitted (1)
Finish migration, then checking..
PASS 230c (4s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity test_230d skipping SLOW test 230d
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230e: migrate multiple local link files === 23:13:40 (1713410020)
b
PASS 230e (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230f: migrate multiple remote link files == 23:13:46 (1713410026)
ln1 ln2
PASS 230f (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230g: migrate dir to non-exist MDT ======== 23:13:52 (1713410032)
lfs migrate: /mnt/lustre/d230g.sanity/migrate_dir migrate failed: No such device (19)
PASS 230g (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230h: migrate .. and root ================= 23:13:56 (1713410036)
lfs migrate: /mnt/lustre migrate failed: Inappropriate ioctl for device (25)
lfs migrate: /mnt/lustre/d230h.sanity/.. migrate failed: Invalid argument (22)
lfs migrate: /mnt/lustre/d230h.sanity/migrate_dir/.. migrate failed: Device or resource busy (16)
PASS 230h (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230i: lfs migrate -m tolerates trailing slashes ========================================================== 23:14:01 (1713410041)
PASS 230i (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230j: DoM file data not changed after dir migration ========================================================== 23:14:06 (1713410046)
PASS 230j (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230k: file data not changed after dir migration ========================================================== 23:14:11 (1713410051)
SKIP: sanity test_230k needs >= 4 MDTs
SKIP 230k (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230l: readdir between MDTs won't crash ==== 23:14:15 (1713410055)
total: 1000 open/close in 4.14 seconds: 241.61 ops/second
PASS 230l (44s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230m: xattrs not changed after dir migration ========================================================== 23:15:01 (1713410101)
Creating files and dirs with xattrs
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d230m.sanity
striped dir -i0 -c1 -H crush2 /mnt/lustre/d230m.sanity/migrate_dir
Migrating to MDT1
Checking xattrs
PASS 230m (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230n: Dir migration with mirrored file ==== 23:15:08 (1713410108)
PASS 230n (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230o: dir split =========================== 23:15:13 (1713410113)
lod.lustre-MDT0000-mdtlov.mdt_hash=crush
lod.lustre-MDT0001-mdtlov.mdt_hash=crush
mdt.lustre-MDT0000.enable_dir_restripe=1
mdt.lustre-MDT0001.enable_dir_restripe=1
total: 100 create in 0.44 seconds: 228.73 ops/second
total: 100 mkdir in 1.04 seconds: 96.38 ops/second
Waiting 100s for 'crush'
Updated after 7s: want 'crush' got 'crush'
99 migrated when dir split 1 to 2 stripes
mdt.lustre-MDT0000.enable_dir_restripe=0
mdt.lustre-MDT0001.enable_dir_restripe=0
PASS 230o (14s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230p: dir merge =========================== 23:15:29 (1713410129)
lod.lustre-MDT0000-mdtlov.mdt_hash=crush
lod.lustre-MDT0001-mdtlov.mdt_hash=crush
mdt.lustre-MDT0000.enable_dir_restripe=1
mdt.lustre-MDT0001.enable_dir_restripe=1
striped dir -i0 -c2 -H crush /mnt/lustre/d230p.sanity
total: 100 create in 0.48 seconds: 210.08 ops/second
total: 100 mkdir in 0.47 seconds: 211.34 ops/second
Waiting 100s for 'crush,fixed'
Updated after 9s: want 'crush,fixed' got 'crush,fixed'
99 migrated when dir merge 2 to 1 stripes
mdt.lustre-MDT0000.enable_dir_restripe=0
mdt.lustre-MDT0001.enable_dir_restripe=0
PASS 230p (15s)
debug_raw_pointers=0
debug_raw_pointers=0
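[editor's note] Test 230q below drives automatic directory split by lowering the split thresholds; the knobs it touches are exactly the parameters listed in the log. A sketch of the same setup, with the threshold values taken from the log (the interpretation in the comments is approximate):

  lctl set_param mdt.*.enable_dir_auto_split=1
  lctl set_param mdt.*.dir_split_count=100   # consider splitting once a dir holds ~100 entries
  lctl set_param mdt.*.dir_split_delta=2     # grow by 2 stripes per split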
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230q: dir auto split ====================== 23:15:46 (1713410146)
mdt.lustre-MDT0000.enable_dir_auto_split=1
mdt.lustre-MDT0001.enable_dir_auto_split=1
mdt.lustre-MDT0000.dir_split_count=100
mdt.lustre-MDT0001.dir_split_count=100
mdt.lustre-MDT0000.dir_split_delta=2
mdt.lustre-MDT0001.dir_split_delta=2
mdt.lustre-MDT0000.dir_restripe_nsonly=0
mdt.lustre-MDT0001.dir_restripe_nsonly=0
lod.lustre-MDT0000-mdtlov.mdt_hash=crush
lod.lustre-MDT0001-mdtlov.mdt_hash=crush
total: 150 create in 0.29 seconds: 524.83 ops/second
Waiting 200s for 'crush'
Updated after 8s: want 'crush' got 'crush'
0/150 files on MDT1 after split
fixed layout directory won't auto split
mdt.lustre-MDT0000.dir_restripe_nsonly=1
mdt.lustre-MDT0001.dir_restripe_nsonly=1
mdt.lustre-MDT0000.dir_split_delta=4
mdt.lustre-MDT0001.dir_split_delta=4
mdt.lustre-MDT0000.dir_split_count=50000
mdt.lustre-MDT0001.dir_split_count=50000
PASS 230q (22s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230r: migrate with too many local locks === 23:16:11 (1713410171)
PASS 230r (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230s: lfs mkdir should return -EEXIST if target exists ========================================================== 23:16:16 (1713410176)
mdt.lustre-MDT0000.enable_dir_restripe=0
mdt.lustre-MDT0001.enable_dir_restripe=0
striped dir -i0 -c2 -H crush /mnt/lustre/d230s.sanity
lfs setdirstripe: cannot create dir '/mnt/lustre/d230s.sanity': File exists
mdt.lustre-MDT0000.enable_dir_restripe=1
mdt.lustre-MDT0001.enable_dir_restripe=1
striped dir -i0 -c2 -H all_char /mnt/lustre/d230s.sanity
lfs setdirstripe: cannot create dir '/mnt/lustre/d230s.sanity': File exists
mdt.lustre-MDT0000.enable_dir_restripe=0
mdt.lustre-MDT0001.enable_dir_restripe=0
PASS 230s (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230t: migrate directory with project ID set ========================================================== 23:16:23 (1713410183)
striped dir -i0 -c2 -H crush /mnt/lustre/d230t.sanity
striped dir -i0 -c2 -H crush /mnt/lustre/d230t.sanity/subdir
striped dir -i0 -c2 -H crush /mnt/lustre/d230t.sanity.2
PASS 230t (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230u: migrate directory by QOS ============ 23:16:29 (1713410189)
SKIP: sanity test_230u needs >= 4 MDTs
SKIP 230u (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230v: subdir migrated to the MDT where its parent is located ========================================================== 23:16:32 (1713410192)
SKIP: sanity test_230v needs >= 4 MDTs
SKIP 230v (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230w: non-recursive mode dir migration ==== 23:16:36 (1713410196)
total: 10 open/close in 0.11 seconds: 94.27 ops/second
total: 10 mkdir in 0.11 seconds: 92.65 ops/second
PASS 230w (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230x: dir migration check space =========== 23:16:42 (1713410202)
total: 100 mkdir in 1.03 seconds: 96.89 ops/second
lfs migrate: /mnt/lustre/d230x.sanity migrate failed: No space left on device (28)
PASS 230x (27s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230y: unlink dir with bad hash type ======= 23:17:11 (1713410231)
striped dir -i0 -c2 -H crush2 /mnt/lustre/d230y.sanity
lmv_stripe_count: 2
lmv_stripe_offset: 0
lmv_hash_type: crush2
mdtidx		 FID[seq:oid:ver]
     0		 [0x200004280:0xad:0x0]
     1		 [0x240001b79:0x9c:0x0]
total: 100 mkdir in 0.47 seconds: 211.85 ops/second
fail_loc=0x1802
lfs migrate: /mnt/lustre/d230y.sanity/d59 migrate failed: File descriptor in bad state (77)
fail_loc=0
lmv_stripe_count: 4
lmv_stripe_offset: 1
lmv_hash_type: none,bad_type
mdtidx		 FID[seq:oid:ver]
     1		 [0x240001b71:0xae:0x0]
     0		 [0x200002341:0xf7:0x0]
     0		 [0x200004280:0xad:0x0]
     1		 [0x240001b79:0x9c:0x0]
- unlinked 0 (time 1713410234 ; total 0 ; last 0)
total: 100 unlinks in 4 seconds: 25.000000 unlinks/second
PASS 230y (8s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 230z: resume dir migration with bad hash type ========================================================== 23:17:22 (1713410242)
striped dir -i0 -c1 -H crush /mnt/lustre/d230z.sanity
lmv_stripe_count: 0
lmv_stripe_offset: 0
lmv_hash_type: none
total: 100 mkdir in 0.48 seconds: 206.93 ops/second
fail_loc=0x1802
lfs migrate: /mnt/lustre/d230z.sanity/d10 migrate failed: File descriptor in bad state (77)
fail_loc=0
lmv_stripe_count: 3
lmv_stripe_offset: 1
lmv_hash_type: none,bad_type
mdtidx		 FID[seq:oid:ver]
     1		 [0x240001b71:0xb3:0x0]
     0		 [0x200002341:0xfc:0x0]
     0		 [0x200003ab1:0x3638:0x0]
lmv_stripe_count: 2
lmv_stripe_offset: 1
lmv_hash_type: fnv_1a_64,fixed
mdtidx		 FID[seq:oid:ver]
     1		 [0x240001b71:0xb3:0x0]
     0		 [0x200002341:0xfc:0x0]
PASS 230z (17s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 231a: checking that reading/writing of BRW RPC size results in one RPC ========================================================== 23:17:41 (1713410261)
vm.dirty_writeback_centisecs = 0
vm.dirty_writeback_centisecs = 0
vm.dirty_ratio = 50
vm.dirty_background_ratio = 25
vm.dirty_writeback_centisecs = 500
vm.dirty_background_ratio = 10
vm.dirty_ratio = 20
PASS 231a (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 231b: must not assert on fully utilized OST request buffer ========================================================== 23:17:48 (1713410268)
PASS 231b (10s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 232a: failed lock should not block umount ========================================================== 23:18:01 (1713410281)
fail_loc=0x31c
dd: failed to open '/mnt/lustre/d232a.sanity/f232a.sanity': Cannot allocate memory
fail_loc=0
192.168.201.146@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg146-client.virtnet /mnt/lustre (opts:)
Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
PASS 232a (10s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 232b: failed data version lock should not block umount ========================================================== 23:18:14 (1713410294)
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0491147 s, 21.3 MB/s
fail_loc=0x31c
lfs data_version: cannot get version for '/mnt/lustre/d232b.sanity/f232b.sanity': Input/output error
fail_loc=0
192.168.201.146@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg146-client.virtnet /mnt/lustre (opts:)
Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
PASS 232b (10s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 233a: checking that OBF of the FS root succeeds ========================================================== 23:18:26 (1713410306)
PASS 233a (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 233b: checking that OBF of the FS .lustre succeeds ========================================================== 23:18:31 (1713410311)
PASS 233b (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 234: xattr cache should not crash on ENOMEM ========================================================== 23:18:36 (1713410316)
llite.lustre-ffff88012d090800.xattr_cache=1
fail_loc=0x1405
/mnt/lustre/d234.sanity/f234.sanity: user.attr: Cannot allocate memory
fail_loc=0x0
PASS 234 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 235: LU-1715: flock deadlock detection does not work properly ========================================================== 23:18:41 (1713410321)
2506: taking lock1 [100, 200]
2506: done
2506 sleeping 2
2506: putting lock1 [100, 200]
2506: done
2506 Exit
2505: taking lock0 [0, 100]
2505: done
2505 sleeping 1
2505: taking lock3 [100, 300]
2505: expected deadlock
2505: putting lock0 [0, 100]
2505: done
2505 Exit
2504: sleeping 1
2504: taking lock2 [200, 300]
2504: done
2504: taking lock0 [0, 100]
2504: done
2504: putting lock0 [0, 100]
2504: done
2504: putting lock2 [200, 300]
2504: done
2504 Exit
PASS 235 (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 236: Layout swap on open unlinked file ==== 23:18:48 (1713410328)
striped dir -i0 -c1 -H crush2 /mnt/lustre/d236.sanity
PASS 236 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 238: Verify linkea consistency ============ 23:18:53 (1713410333)
PASS 238 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 239A: osp_sync test ======================= 23:18:58 (1713410338)
- open/close 2389 (time 1713410349.95 total 10.00 last 238.82)
- open/close 4749 (time 1713410359.95 total 20.01 last 235.95)
total: 5000 open/close in 21.06 seconds: 237.38 ops/second
- unlinked 0 (time 1713410362 ; total 0 ; last 0)
total: 5000 unlinks in 11 seconds: 454.545441 unlinks/second
PASS 239A (39s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 239a: process invalid osp sync record correctly ========================================================== 23:19:39 (1713410379)
fail_loc=0x2100
Waiting for MDT destroys to complete
PASS 239a (5s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 239b: process osp sync record with ENOMEM error correctly ========================================================== 23:19:46 (1713410386)
fail_loc=0x2101
Waiting for MDT destroys to complete
fail_loc=0
Waiting for MDT destroys to complete
PASS 239b (5s)
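[editor's note] Nearly every failure-path test in this run follows the injection pattern visible in 239a/239b above: arm an OBD fail point, run the workload, disarm it. A minimal sketch of that pattern; the 0x2101 value is the OSP-sync ENOMEM case shown for test 239b, and the workload line is a placeholder:

  lctl set_param fail_loc=0x2101   # arm the fail point
  # ... create and destroy objects so the osp sync path runs ...
  lctl set_param fail_loc=0        # disarm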
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 240: race between ldlm enqueue and the connection RPC (no ASSERT) ========================================================== 23:19:54 (1713410394)
192.168.201.146@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg146-client.virtnet /mnt/lustre (opts:)
fail_loc=0x713
fail_val=1
Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre
stat /mnt/lustre/d240.sanity/d0/d1, should not fail/ASSERT
  File: '/mnt/lustre/d240.sanity/d0/d1'
  Size: 4096      	Blocks: 8          IO Block: 1048576 directory
Device: 2c54f966h/743766374d	Inode: 162129704596280202  Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2024-04-17 23:19:55.000000000 -0400
Modify: 2024-04-17 23:19:55.000000000 -0400
Change: 2024-04-17 23:19:55.000000000 -0400
 Birth: -
PASS 240 (6s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 241a: bio vs dio ========================== 23:20:03 (1713410403)
1+0 records in
1+0 records out
40960 bytes (41 kB) copied, 0.0091602 s, 4.5 MB/s
-rw-r--r-- 1 root root 40960 Apr 17 23:20 /mnt/lustre/f241a.sanity
ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1
ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 
ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 
ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 
ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 
ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 
ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 
ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 
ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 
ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lock_unused_count=1 
PASS 241a (33s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 241b: dio vs dio ========================== 23:20:39 (1713410439) 1+0 records in 1+0 records out 40960 bytes (41 kB) copied, 0.013909 s, 2.9 MB/s -rw-r--r-- 1 root root 40960 Apr 17 23:20 /mnt/lustre/f241b.sanity PASS 241b (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 242: mdt_readpage failure should not cause directory unreadable ========================================================== 23:20:57 (1713410457) fail_loc=0x105 /bin/ls: reading directory /mnt/lustre/d242.sanity: Cannot allocate memory fail_loc=0 f242.sanity PASS 242 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 243: various group lock tests ============= 23:21:03 (1713410463) striped dir -i1 -c2 -H crush2 /mnt/lustre/d243.sanity Starting test test10 at 1713410463 Finishing test test10 at 1713410467 Starting test test11 at 1713410467 Finishing test test11 at 1713410500 Starting test test12 at 1713410500 Finishing test test12 at 1713410501 Starting test test20 at 1713410501 Finishing test test20 at 1713410501 Starting test test30 at 1713410501 Finishing test test30 at 1713410501 Starting test test40 at 1713410501 Finishing test test40 at 1713410501 PASS 243 (40s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 244a: sendfile with group lock tests ====== 23:21:45 (1713410505) striped dir -i0 -c2 -H all_char /mnt/lustre/d244a.sanity 35+0 records in 35+0 records out 36700160 bytes (37 MB) copied, 0.934144 s, 39.3 MB/s Starting test test10 at 1713410507 Finishing test test10 at 1713410512 Starting test test11 at 1713410512 Finishing test test11 at 1713410517
Starting test test12 at 1713410517 Finishing test test12 at 1713410523 Starting test test13 at 1713410523 Finishing test test13 at 1713410529 Starting test test14 at 1713410529 Finishing test test14 at 1713410536 Starting test test15 at 1713410536 Finishing test test15 at 1713410536 Starting test test16 at 1713410536 Finishing test test16 at 1713410537 PASS 244a (34s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 244b: multi-threaded write with group lock ========================================================== 23:22:22 (1713410542) striped dir -i0 -c2 -H all_char /mnt/lustre/d244b.sanity PASS 244b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 245a: check mdc connection flag/data: multiple modify RPCs ========================================================== 23:22:27 (1713410547) connect_flags: [ write_grant, server_lock, version, acl, xattr, create_on_write, inode_bit_locks, getattr_by_fid, no_oh_for_devices, max_byte_per_rpc, early_lock_cancel, adaptive_timeouts, lru_resize, alt_checksum_algorithm, fid_is_enabled, version_recovery, pools, grant_shrink, large_ea, full20, layout_lock, 64bithash, jobstats, umask, einprogress, grant_param, lvb_type, short_io, flock_deadlock, disp_stripe, open_by_fid, lfsck, multi_mod_rpcs, dir_stripe, subtree, bulk_mbits, second_flags, file_secctx, dir_migrate, sum_statfs, overstriping, flr, lock_convert, archive_id_array, increasing_xid, selinux_policy, lsom, pcc, crush, async_discard, getattr_pfid, lseek, dom_lvb, reply_mbits, batch_rpc, atomic_open_lock, dmv_imp_inherit, unaligned_dio ] mdc.lustre-MDT0000-mdc-ffff8800b588d800.import= import: name: lustre-MDT0000-mdc-ffff8800b588d800 target: lustre-MDT0000_UUID state: FULL connect_flags: [ write_grant, server_lock, version, acl, xattr, create_on_write, inode_bit_locks, getattr_by_fid, no_oh_for_devices, max_byte_per_rpc, early_lock_cancel, adaptive_timeouts, lru_resize, alt_checksum_algorithm, fid_is_enabled, version_recovery, pools, grant_shrink, large_ea, full20, layout_lock, 64bithash, jobstats, umask, einprogress, grant_param, lvb_type, short_io, flock_deadlock, disp_stripe, open_by_fid, lfsck, multi_mod_rpcs, dir_stripe, subtree, bulk_mbits, second_flags, file_secctx, dir_migrate, sum_statfs, overstriping, flr, lock_convert, archive_id_array, increasing_xid, selinux_policy, lsom, pcc, crush, async_discard, getattr_pfid, lseek, dom_lvb, reply_mbits, batch_rpc, atomic_open_lock, dmv_imp_inherit, unaligned_dio ] connect_data: flags: 0xae7a5e7be344d3b8 instance: 9 target_version: 2.15.62.23 initial_grant: 2146304 max_brw_size: 1048576 ibits_known: 0x7f grant_block_size: 4096 grant_inode_size: 32 grant_max_extent_size: 67108864 grant_extent_tax: 24576 cksum_types: 0xf7 max_easize: 65536 max_mod_rpcs: 8 import_flags: [ replayable, pingable, connect_tried ] connection: failover_nids: [ "192.168.201.146@tcp" ] nids_stats: "192.168.201.146@tcp": { connects: 1, replied: 1, uptodate: true, sec_ago: 152 } current_connection: "192.168.201.146@tcp" connection_attempts: 1 generation: 1 in-progress_invalidations: 0 idle: 5 sec rpcs: inflight: 0 unregistering: 0 timeouts: 0 avg_waittime: 3059 usecs service_estimates: services: 5 sec network: 5 sec transactions: last_replay: 0 peer_committed: 38654714775 last_checked: 38654714775 PASS 245a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 245b: check osp connection flag/data: multiple modify RPCs 
========================================================== 23:22:31 (1713410551) connect_flags: [ version, acl, inode_bit_locks, adaptive_timeouts, mds_mds_connection, fid_is_enabled, full20, lfsck, multi_mod_rpcs, bulk_mbits, second_flags ] PASS 245b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 247a: mount subdir as fileset ============= 23:22:37 (1713410557) Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre/d247a.sanity /mnt/lustre_d247a.sanity 192.168.201.146@tcp:/lustre/d247a.sanity /mnt/lustre_d247a.sanity lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg146-client.virtnet /mnt/lustre_d247a.sanity (opts:) PASS 247a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 247b: mount subdir that does not exist ==== 23:22:42 (1713410562) Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre/d247b.sanity /mnt/lustre_d247b.sanity mount.lustre: mount oleg146-server@tcp:/lustre/d247b.sanity at /mnt/lustre_d247b.sanity failed: No such file or directory Is the MGS specification correct? Is the filesystem name correct? If upgrading, is the copied client log valid? (see upgrade docs) PASS 247b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 247c: running fid2path outside subdirectory root ========================================================== 23:22:47 (1713410567) Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre/d247c.sanity /mnt/lustre_d247c.sanity lfs fid2path: cannot find /mnt/lustre_d247c.sanity [0x200000007:0x1:0x0]: No such file or directory 192.168.201.146@tcp:/lustre/d247c.sanity /mnt/lustre_d247c.sanity lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg146-client.virtnet /mnt/lustre_d247c.sanity (opts:) PASS 247c (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 247d: running fid2path inside subdirectory root ========================================================== 23:22:53 (1713410573) Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre/d247d.sanity /mnt/lustre_d247d.sanity /mnt/lustre_d247d.sanity [0x200004283:0x1c:0x0] /mnt/lustre_d247d.sanity/// [0x200004283:0x1c:0x0] /mnt/lustre_d247d.sanity/dir1 [0x200004283:0x1c:0x0] lfs fid2path: cannot resolve mount point for '/mnt/lustre_d247d.sanity_wrong': No such device 192.168.201.146@tcp:/lustre/d247d.sanity /mnt/lustre_d247d.sanity lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg146-client.virtnet /mnt/lustre_d247d.sanity (opts:) PASS 247d (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 247e: mount .. as fileset ================= 23:22:59 (1713410579) Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre/.. /mnt/lustre_d247e.sanity mount.lustre: mount oleg146-server@tcp:/lustre/.. at /mnt/lustre_d247e.sanity failed: Invalid argument This may have multiple causes. Is 'lustre/..' the correct filesystem name? Are the mount options correct? Check the syslog for more info.
PASS 247e (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 247f: mount striped or remote directory as fileset ========================================================== 23:23:04 (1713410584) mdt.lustre-MDT0000.enable_remote_subdir_mount=1 mdt.lustre-MDT0001.enable_remote_subdir_mount=1 Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre/d247f.sanity/remote /mnt/lustre_d247f.sanity 192.168.201.146@tcp:/lustre/d247f.sanity/remote /mnt/lustre_d247f.sanity lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg146-client.virtnet /mnt/lustre_d247f.sanity (opts:) Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre/d247f.sanity/remote/subdir /mnt/lustre_d247f.sanity 192.168.201.146@tcp:/lustre/d247f.sanity/remote/subdir /mnt/lustre_d247f.sanity lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg146-client.virtnet /mnt/lustre_d247f.sanity (opts:) Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre/d247f.sanity/striped /mnt/lustre_d247f.sanity 192.168.201.146@tcp:/lustre/d247f.sanity/striped /mnt/lustre_d247f.sanity lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg146-client.virtnet /mnt/lustre_d247f.sanity (opts:) Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre/d247f.sanity/striped/subdir /mnt/lustre_d247f.sanity 192.168.201.146@tcp:/lustre/d247f.sanity/striped/subdir /mnt/lustre_d247f.sanity lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg146-client.virtnet /mnt/lustre_d247f.sanity (opts:) Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre/d247f.sanity/striped/. /mnt/lustre_d247f.sanity 192.168.201.146@tcp:/lustre/d247f.sanity/striped/. 
/mnt/lustre_d247f.sanity lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg146-client.virtnet /mnt/lustre_d247f.sanity (opts:) PASS 247f (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 247g: striped directory submount revalidate ROOT from cache ========================================================== 23:23:13 (1713410593) Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre/d247g.sanity /mnt/lustre_d247g.sanity PASS 247g (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 247h: remote directory submount revalidate ROOT from cache ========================================================== 23:23:19 (1713410599) Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre/d247h.sanity /mnt/lustre_d247h.sanity Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre/d247h.sanity/d247h.sanity.0/d247h.sanity.1 /mnt/lustre_d247h.sanity.1 PASS 247h (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 248a: fast read verification ============== 23:23:27 (1713410607) /mnt/lustre/f248a.sanity has size 134217728 OK Test 1: verify that fast read is 4 times faster on cache read Test 2: verify the performance between big and small read PASS 248a (35s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 248b: test short_io read and write for both small and large sizes ========================================================== 23:24:05 (1713410645) bs=53248 count=113 normal buffered write 113+0 records in 113+0 records out 6017024 bytes (6.0 MB) copied, 0.0729052 s, 82.5 MB/s bs=47008 count=128 oflag=dsync normal write f248b.sanity.0 128+0 records in 128+0 records out 6017024 bytes (6.0 MB) copied, 2.04871 s, 2.9 MB/s bs=11752 count=512 oflag=dsync small write f248b.sanity.1 512+0 records in 512+0 records out 6017024 bytes (6.0 MB) copied, 5.56104 s, 1.1 MB/s bs=4096 count=1469 iflag=direct small read f248b.sanity.1 1469+0 records in 1469+0 records out 6017024 bytes (6.0 MB) copied, 6.2008 s, 970 kB/s test invalid parameter 2MB error: set_param: setting /sys/fs/lustre/osc/lustre-OST0000-osc-ffff8800b588d800/short_io_bytes=2M: Numerical result out of range error: set_param: setting 'osc/lustre-OST0000*/short_io_bytes'='2M': Numerical result out of range test maximum parameter 512KB osc.lustre-OST0000-osc-ffff8800b588d800.short_io_bytes=512K osc.lustre-OST0000-osc-ffff8800b588d800.short_io_bytes=262144 test large parameter 64KB osc.lustre-OST0000-osc-ffff8800b588d800.short_io_bytes=65536 osc.lustre-OST0001-osc-ffff8800b588d800.short_io_bytes=65536 osc.lustre-OST0000-osc-ffff8800b588d800.short_io_bytes=65536 bs=47008 count=128 oflag=dsync large write f248b.sanity.2 128+0 records in 128+0 records out 6017024 bytes (6.0 MB) copied, 1.86683 s, 3.2 MB/s bs=53248 count=113 oflag=direct large write f248b.sanity.3 113+0 records in 113+0 records out 6017024 bytes (6.0 MB) copied, 0.795623 s, 7.6 MB/s bs=53248 count=113 iflag=direct large read f248b.sanity.2 113+0 records in 113+0 records out 6017024 bytes (6.0 MB) copied, 0.671137 s, 9.0 MB/s bs=53248 count=113 iflag=direct large read f248b.sanity.3 113+0 records in 113+0 records out 6017024 bytes (6.0 MB) copied, 0.637062 s, 9.4 MB/s 
osc.lustre-OST0000-osc-ffff8800b588d800.short_io_bytes=16384 osc.lustre-OST0001-osc-ffff8800b588d800.short_io_bytes=16384 PASS 248b (21s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 248c: verify whole file read behavior ===== 23:24:28 (1713410668) llite.lustre-ffff8800b588d800.read_ahead_stats=c llite.lustre-ffff8800b588d800.max_read_ahead_whole_mb=64 whole file readahead of 64 MiB took 37.8 seconds llite.lustre-ffff8800b588d800.read_ahead_stats= snapshot_time 1713410671.181633706 secs.nsecs start_time 1713410669.307667872 secs.nsecs elapsed_time 1.873965834 secs.nsecs hits 16382 samples [pages] misses 2 samples [pages] zero_size_window 1 samples [pages] failed_to_fast_read 3 samples [pages] readahead_pages 1 samples [pages] 16382 16382 16382 llite.lustre-ffff8800b588d800.read_ahead_stats=c llite.lustre-ffff8800b588d800.max_read_ahead_whole_mb=8 non-whole file readahead of 64 MiB took 37.2 seconds llite.lustre-ffff8800b588d800.read_ahead_stats= snapshot_time 1713410673.393382750 secs.nsecs start_time 1713410671.186549964 secs.nsecs elapsed_time 2.206832786 secs.nsecs hits 16382 samples [pages] misses 2 samples [pages] zero_size_window 1 samples [pages] failed_to_fast_read 3 samples [pages] readahead_pages 1 samples [pages] 16382 16382 16382 Test passed on attempt 1 llite.lustre-ffff8800b588d800.max_read_ahead_whole_mb=64 PASS 248c (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 249: Write above 2T file size ============= 23:24:37 (1713410677) 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.0046892 s, 873 kB/s PASS 249 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 250: Write above 16T limit ================ 23:24:41 (1713410681) lfs: getstripe for '/mnt/lustre/f250.sanity' failed: No such file or directory dd: error writing '/mnt/lustre/f250.sanity': File too large 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00406703 s, 0.0 kB/s PASS 250 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 251a: Handling short read and write correctly ========================================================== 23:24:46 (1713410686) fail_loc=0xa0001407 fail_val=1 fail_loc=0xa0001407 fail_val=1 PASS 251a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 251b: short read restore offset correctly ========================================================== 23:24:51 (1713410691) 4+0 records in 4+0 records out 4096 bytes (4.1 kB) copied, 0.00850848 s, 481 kB/s fail_loc=0x1431 fail_val=5 PASS 251b (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 252: check lr_reader tool ================= 23:25:01 (1713410701) last_rcvd: uuid: lustre-OST0000_UUID feature_compat: 0x2 feature_incompat: 0xa feature_rocompat: 0x2 last_transaction: 51539610194 target_index: 0 mount_count: 12 uuid returned by /home/green/git/lustre-release/lustre/utils/lr_reader is 'lustre-OST0000_UUID' pdsh@oleg146-client: oleg146-server: ssh exited with exit code 255 last_rcvd: uuid: lustre-MDT0000_UUID feature_compat: 0x8 feature_incompat: 0x61c feature_rocompat: 0x1 last_transaction: 38654714810 target_index: 0 mount_count: 9 client_area_start: 8192 client_area_size: 128 lustre-MDT0001-mdtlov_UUID: generation: 4 last_transaction: 34359774019 last_xid: 0 last_result: 0 last_data: 0 
3483a748-c49b-40d1-b2a7-007d13586d2e: generation: 10 last_transaction: 0 last_xid: 0 last_result: 0 last_data: 0 Number of mdtlov clients returned by /home/green/git/lustre-release/lustre/utils/lr_reader is '1' last_rcvd: uuid: lustre-MDT0000_UUID feature_compat: 0x8 feature_incompat: 0x61c feature_rocompat: 0x1 last_transaction: 38654714810 target_index: 0 mount_count: 9 client_area_start: 8192 client_area_size: 128 lustre-MDT0001-mdtlov_UUID: generation: 4 last_transaction: 34359774019 last_xid: 0 last_result: 0 last_data: 0 3483a748-c49b-40d1-b2a7-007d13586d2e: generation: 10 last_transaction: 0 last_xid: 0 last_result: 0 last_data: 0 reply_data: 0: client_generation: 10 last_transaction: 38654714886 last_xid: 1796636706398080 last_result: 0 last_data: 3 1: client_generation: 10 last_transaction: 38654714887 last_xid: 1796636706398336 last_result: 0 last_data: 0 2: client_generation: 10 last_transaction: 38654714888 last_xid: 1796636706398336 last_result: 0 last_data: 0 3: client_generation: 4 last_transaction: 38654714813 last_xid: 1796636650754624 last_result: 0 last_data: 0 4: client_generation: 10 last_transaction: 38654714815 last_xid: 1796636706037120 last_result: 0 last_data: 0 5: client_generation: 7 last_transaction: 38654711268 last_xid: 1796636700964544 last_result: 0 last_data: 0 6: client_generation: 7 last_transaction: 25769895686 last_xid: 1796636680742848 last_result: 0 last_data: 0 7: client_generation: 7 last_transaction: 25769895398 last_xid: 1796636680687808 last_result: 0 last_data: 0 8: client_generation: 7 last_transaction: 25769895378 last_xid: 1796636680683968 last_result: 0 last_data: 0 9: client_generation: 7 last_transaction: 25769895361 last_xid: 1796636680680704 last_result: 0 last_data: 0 10: client_generation: 7 last_transaction: 25769894363 last_xid: 1796636680489408 last_result: 0 last_data: 0 Number of reply data returned by /home/green/git/lustre-release/lustre/utils/lr_reader is '11' PASS 252 (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 253: Check object allocation limit ======== 23:25:08 (1713410708) keep default fallocate mode: 0 7 Waiting for orphan cleanup... osp.lustre-OST0000-osc-MDT0000.old_sync_processed osp.lustre-OST0000-osc-MDT0001.old_sync_processed osp.lustre-OST0001-osc-MDT0000.old_sync_processed osp.lustre-OST0001-osc-MDT0001.old_sync_processed wait at most 40 secs for oleg146-server mds-ost sync done.
Waiting for MDT destroys to complete Creating new pool oleg146-server: Pool lustre.test_253 created Adding targets to pool oleg146-server: OST lustre-OST0000_UUID added to pool lustre.test_253 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.256257 s, 40.9 MB/s prealloc_status -28 dd: failed to open '/mnt/lustre/d253.sanity/f253.sanity.1': No space left on device 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.131612 s, 39.8 MB/s Waiting for MDT destroys to complete prealloc_status 0 Destroy the created pools: test_253 lustre.test_253 oleg146-server: OST lustre-OST0000_UUID removed from pool lustre.test_253 oleg146-server: Pool lustre.test_253 destroyed PASS 253 (70s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 254: Check changelog size ================= 23:26:21 (1713410781) 17344 mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl29 cl1' lustre-MDT0000: clear the changelog for cl1 of all records lustre-MDT0001: clear the changelog for cl29 of all records Changelog size 25680 Changelog size after work 36112 lustre-MDT0001: clear the changelog for cl29 of all records lustre-MDT0001: Deregistered changelog user #29 lustre-MDT0000: clear the changelog for cl1 of all records lustre-MDT0000: Deregistered changelog user #1 PASS 254 (8s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity test_255a skipping excluded test 255a (base 255) SKIP: sanity test_255b skipping excluded test 255b (base 255) SKIP: sanity test_255c skipping excluded test 255c (base 255) SKIP: sanity test_256 skipping excluded test 256 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 257: xattr locks are not lost ============= 23:26:34 (1713410794) striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d257.sanity File: '/mnt/lustre/d257.sanity' Size: 8192 Blocks: 16 IO Block: 1048576 directory Device: 2c54f966h/743766374d Inode: 162129704613052433 Links: 2 Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-17 23:26:35.000000000 -0400 Modify: 2024-04-17 23:26:35.000000000 -0400 Change: 2024-04-17 23:26:35.000000000 -0400 Birth: - fail_val=0 fail_loc=0x80000161 Stopping /mnt/lustre-mds2 (opts:) on oleg146-server Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-MDT0001 affected facets: mds2 oleg146-server: oleg146-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475 oleg146-server: *.lustre-MDT0001.recovery_status status: RECOVERING oleg146-server: Waiting 1470 secs for *.lustre-MDT0001.recovery_status recovery done. 
status: RECOVERING oleg146-server: *.lustre-MDT0001.recovery_status status: COMPLETE PASS 257 (21s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 258a: verify i_mutex security behavior when suid attribute is set ========================================================== 23:26:57 (1713410817) fail_loc=0x141c running as uid/gid/euid/egid 500/500/500/500, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/f258a.sanity] [bs=4k] [count=1] [oflag=append] 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.0113864 s, 360 kB/s PASS 258a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 258b: verify i_mutex security behavior ==== 23:27:02 (1713410822) fail_loc=0x141d running as uid/gid/euid/egid 500/500/500/500, groups: [dd] [if=/dev/zero] [of=/mnt/lustre/f258b.sanity] [bs=4k] [count=1] [oflag=append] 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00501218 s, 817 kB/s PASS 258b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 259: crash at delayed truncate ============ 23:27:07 (1713410827) Waiting for MDT destroys to complete before: 3814308 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.278595 s, 37.6 MB/s after write: 3806116 fail_loc=0x2301 after truncate: 3806116 Stopping /mnt/lustre-ost1 (opts:) on oleg146-server fail_loc=0 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-OST0000 after restart: 3814304 PASS 259 (19s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 260: Check mdc_close fail ================= 23:27:28 (1713410848) fail_loc=0x80000806 PASS 260 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 270a: DoM: basic functionality tests ====== 23:27:33 (1713410853) 192+0 records in 192+0 records out 196608 bytes (197 kB) copied, 0.0665124 s, 3.0 MB/s 3+0 records in 3+0 records out 196608 bytes (197 kB) copied, 0.0369984 s, 5.3 MB/s 1984+0 records in 1984+0 records out 2031616 bytes (2.0 MB) copied, 0.335663 s, 6.1 MB/s 31+0 records in 31+0 records out 2031616 bytes (2.0 MB) copied, 0.299152 s, 6.8 MB/s PASS 270a (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 270b: DoM: maximum size overflow checks for DoM-only file ========================================================== 23:27:41 (1713410861) truncate: cannot truncate '/mnt/lustre/d270b.sanity/dom_file' to length 1048577: File too large dd: error writing '/mnt/lustre/d270b.sanity/dom_file': No data available 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00267851 s, 0.0 kB/s 1+0 records in 1+0 records out 1048573 bytes (1.0 MB) copied, 0.0453863 s, 23.1 MB/s /home/green/git/lustre-release/lustre/tests/sanity.sh: line 24838: echo: write error: File too large PASS 270b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 270c: DoM: DoM EA inheritance tests ======= 23:27:46 (1713410866) PASS 270c (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 270d: DoM: change striping from DoM to RAID0 ========================================================== 23:27:51
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 270d: DoM: change striping from DoM to RAID0 ========================================================== 23:27:51 (1713410871)
PASS 270d (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 270e: DoM: lfs find with DoM files test === 23:27:56 (1713410876)
total: 20 open/close in 0.19 seconds: 106.92 ops/second
total: 10 open/close in 0.10 seconds: 101.69 ops/second
Test 1: lfs find 20 DOM files by layout: OK
Test 2: lfs find 1 DOM dir by layout: OK
Test 4: lfs find 20 DOM files by stripe size: OK
Test 5: lfs find no DOM files by stripe index: OK
PASS 270e (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 270f: DoM: maximum DoM stripe size checks ========================================================== 23:28:02 (1713410882)
lfs setstripe: cannot create composite file '/mnt/lustre/d270f.sanity/dom_file': Invalid argument
oleg146-server: error: set_param: setting /sys/fs/lustre/lod/lustre-MDT0000-mdtlov/dom_stripesize=2147483648: Numerical result out of range
oleg146-server: error: set_param: setting 'lod/lustre-MDT0000-mdtlov/dom_stripesize'='2147483648': Numerical result out of range
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 34
65536
PASS 270f (7s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 270g: DoM: default DoM stripe size depends on free space ========================================================== 23:28:11 (1713410891)
DOM threshold is 50% free space
Free space: 40%, default DOM stripe: 512K
Free space: 20%, default DOM stripe: 256K
Free space: 0%, default DOM stripe: 0K
Free space: 15%, default DOM stripe: 256K
Free space: 30%, default DOM stripe: 512K
Free space: 55%, default DOM stripe: 1024K
PASS 270g (10s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 270h: DoM: DoM stripe removal when disabled on server ========================================================== 23:28:24 (1713410904)
PASS 270h (3s)
debug_raw_pointers=0
debug_raw_pointers=0
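The server-side cap that test 270f probes is the lod dom_stripesize parameter seen in the errors above; a sketch of reading and lowering it on the MDS (the 2 GiB attempt is rejected with ERANGE, and the test leaves it at 65536):

  # Query the current per-MDT limit on DoM component size
  lctl get_param lod.lustre-MDT0000-mdtlov.dom_stripesize
  # Lower it to 64 KiB; values past the server maximum fail as shown above
  lctl set_param lod.lustre-MDT0000-mdtlov.dom_stripesize=64K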
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 270i: DoM: setting invalid DoM striping should fail ========================================================== 23:28:30 (1713410910)
lfs setstripe: Invalid pattern: '-L mdt', must be specified with -E: Invalid argument (22)
lfs setstripe: Invalid pattern: '-L mdt', must be specified with -E: Invalid argument (22)
Option 'stripe-count' can't be specified with Data-on-MDT component: 1152921504606846979
lfs setstripe: invalid layout
Create a file with specified striping/composite layout, or set the default layout on an existing directory.
Usage: setstripe [--component-add|--component-del|--delete|-d]
                 [--comp-set --comp-id|-I COMP_ID|--comp-flags=COMP_FLAGS]
                 [--component-end|-E END_OFFSET]
                 [--copy=SOURCE_LAYOUT_FILE]|--yaml|-y YAML_TEMPLATE_FILE]
                 [--extension-size|--ext-size|-z EXT_SIZE]
                 [--help|-h] [--foreign=FOREIGN_TYPE --xattr|-x LAYOUT]
                 [--layout|-L PATTERN] [--mode FILE_MODE]
                 [--mirror-count|-N[MIRROR_COUNT]]
                 [--ost|-o OST_INDEX[,OST_INDEX,...]]
                 [--overstripe-count|-C STRIPE_COUNT]
                 [--pool|-p POOL_NAME] [--stripe-count|-c STRIPE_COUNT]
                 [--stripe-index|-i START_OST_IDX]
                 [--stripe-size|-S STRIPE_SIZE]
                 FILENAME|DIRECTORY
Option 'stripe-count' can't be specified with Data-on-MDT component: 1152921504606846979
lfs setstripe: invalid layout
Create a file with specified striping/composite layout, or set the default layout on an existing directory.
Usage: setstripe [--component-add|--component-del|--delete|-d]
                 [--comp-set --comp-id|-I COMP_ID|--comp-flags=COMP_FLAGS]
                 [--component-end|-E END_OFFSET]
                 [--copy=SOURCE_LAYOUT_FILE]|--yaml|-y YAML_TEMPLATE_FILE]
                 [--extension-size|--ext-size|-z EXT_SIZE]
                 [--help|-h] [--foreign=FOREIGN_TYPE --xattr|-x LAYOUT]
                 [--layout|-L PATTERN] [--mode FILE_MODE]
                 [--mirror-count|-N[MIRROR_COUNT]]
                 [--ost|-o OST_INDEX[,OST_INDEX,...]]
                 [--overstripe-count|-C STRIPE_COUNT]
                 [--pool|-p POOL_NAME] [--stripe-count|-c STRIPE_COUNT]
                 [--stripe-index|-i START_OST_IDX]
                 [--stripe-size|-S STRIPE_SIZE]
                 FILENAME|DIRECTORY
PASS 270i (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 270j: DoM migration: DOM file to the OST-striped file (plain) ========================================================== 23:28:35 (1713410915)
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0595764 s, 17.6 MB/s
PASS 270j (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 271a: DoM: data is cached for read after write ========================================================== 23:28:40 (1713410920)
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00473245 s, 866 kB/s
/mnt/lustre/d271a.sanity/dom
PASS 271a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 271b: DoM: no glimpse RPC for stat (DoM only file) ========================================================== 23:28:45 (1713410925)
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00471403 s, 869 kB/s
/mnt/lustre/d271b.sanity/dom has type file OK
/mnt/lustre/d271b.sanity/dom has size 4096 OK
/mnt/lustre/d271b.sanity/dom has type file OK
/mnt/lustre/d271b.sanity/dom has size 4096 OK
PASS 271b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 271ba: DoM: no glimpse RPC for stat (combined file) ========================================================== 23:28:50 (1713410930)
1+0 records in
1+0 records out
2097152 bytes (2.1 MB) copied, 0.115968 s, 18.1 MB/s
/mnt/lustre/d271ba.sanity/dom has type file OK
/mnt/lustre/d271ba.sanity/dom has size 2097152 OK
/mnt/lustre/d271ba.sanity/dom has type file OK
/mnt/lustre/d271ba.sanity/dom has size 2097152 OK
PASS 271ba (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 271c: DoM: IO lock at open saves enqueue RPCs ========================================================== 23:28:55 (1713410935)
total: 1000 open/close in 4.25 seconds: 235.06 ops/second
total: write 2043438 bytes in 15 seconds: 136229.20 bytes/second
snapshot_time         1713410957.033082993 secs.nsecs
start_time            1713410942.169722722 secs.nsecs
elapsed_time          14.863360271 secs.nsecs
req_waittime          5018 samples [usecs] 906 6427 12708001 32826670905
req_active            5018 samples [reqs] 1 9 6100 8642
ldlm_ibits_enqueue    3000 samples [reqs] 1 1 3000 3000
write_bytes           18 samples [bytes] 49 4077 37900 109133226
ost_write             18 samples [usecs] 2848 6427 79470 372370186
mds_close             1000 samples [usecs] 1299 3372 2623723 6910167077
ldlm_cancel           1000 samples [usecs] 906 2732 2117485 4522736575
- unlinked 0 (time 1713410957 ; total 0 ; last 0)
total: 1000 unlinks in 3 seconds: 333.333344 unlinks/second
total: 1000 open/close in 4.09 seconds: 244.48 ops/second
total: write 2043438 bytes in 12 seconds: 170286.50 bytes/second
- unlinked 0 (time 1713410980 ; total 0 ; last 0)
total: 1000 unlinks in 5 seconds: 200.000000 unlinks/second
PASS 271c (52s)
debug_raw_pointers=0
debug_raw_pointers=0
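The enqueue savings that 271c asserts come straight from the client RPC counters shown in the stats dump above; a sketch of collecting the same numbers around an arbitrary workload:

  # Zero the MDC RPC counters, run the workload, then inspect the counters
  lctl set_param mdc.*.stats=clear
  # ... open/write/close workload here ...
  lctl get_param mdc.*.stats | grep -E 'ldlm_ibits_enqueue|mds_close'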
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 271d: DoM: read on open (1K file in reply buffer) ========================================================== 23:29:50 (1713410990)
1+0 records in
1+0 records out
1000 bytes (1.0 kB) copied, 0.000350497 s, 2.9 MB/s
1+0 records in
1+0 records out
1000 bytes (1.0 kB) copied, 0.00442162 s, 226 kB/s
Append to the same page ... DONE
Open and read file ... DONE
PASS 271d (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 271f: DoM: read on open (200K file and read tail) ========================================================== 23:29:55 (1713410995)
1+0 records in
1+0 records out
265000 bytes (265 kB) copied, 0.00493498 s, 53.7 MB/s
1+0 records in
1+0 records out
265000 bytes (265 kB) copied, 0.0161756 s, 16.4 MB/s
Append to the same page ... DONE
Open and read file ... DONE
PASS 271f (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 271g: Discard DoM data vs client flush race ========================================================== 23:30:00 (1713411000)
/mnt/lustre/f271g.sanity has type file OK
fail_loc=0x80000314
PASS 271g (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 272a: DoM migration: new layout with the same DOM component ========================================================== 23:30:06 (1713411006)
1+0 records in
1+0 records out
524288 bytes (524 kB) copied, 0.038838 s, 13.5 MB/s
PASS 272a (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 272b: DoM migration: DOM file to the OST-striped file (plain) ========================================================== 23:30:12 (1713411012)
1+0 records in
1+0 records out
2097152 bytes (2.1 MB) copied, 0.130173 s, 16.1 MB/s
PASS 272b (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 272c: DoM migration: DOM file to the OST-striped file (composite) ========================================================== 23:30:18 (1713411018)
1+0 records in
1+0 records out
2097152 bytes (2.1 MB) copied, 0.0932527 s, 22.5 MB/s
PASS 272c (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 272d: DoM mirroring: OST-striped mirror to DOM file ========================================================== 23:30:25 (1713411025)
1+0 records in
1+0 records out
2097152 bytes (2.1 MB) copied, 0.0947902 s, 22.1 MB/s
lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22)
PASS 272d (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 272e: DoM mirroring: DOM mirror to the OST-striped file ========================================================== 23:30:31 (1713411031)
1+0 records in
1+0 records out
2097152 bytes (2.1 MB) copied, 0.103044 s, 20.4 MB/s
lfs mirror mirror: cannot get UNLOCK lease, ext 8: Invalid argument (22)
PASS 272e (4s)
debug_raw_pointers=0
debug_raw_pointers=0
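The 272a-272f migrations all go through lfs migrate, which takes setstripe-style layout options; a sketch of the two directions exercised here, on a hypothetical file:

  # 272b-style: move a DoM file to a plain two-stripe OST layout
  lfs migrate -c 2 -S 1M /mnt/lustre/somefile
  # 272f-style: move an OST-striped file back under a DoM layout
  lfs migrate -E 1M -L mdt -E eof -c 1 /mnt/lustre/somefile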
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 272f: DoM migration: OST-striped file to DOM file ========================================================== 23:30:38 (1713411038)
1+0 records in
1+0 records out
2097152 bytes (2.1 MB) copied, 0.105183 s, 19.9 MB/s
lfs migrate: cannot get UNLOCK lease, ext 8: Invalid argument (22)
/mnt/lustre/d272f.sanity/f272f.sanity
/mnt/lustre/d272f.sanity/f272f.sanity
PASS 272f (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 273a: DoM: layout swapping should fail with DOM ========================================================== 23:30:44 (1713411044)
lfs swap_layouts: error: cannot swap layout between '/mnt/lustre/d273a.sanity/f273a.sanity_plain' and '/mnt/lustre/d273a.sanity/f273a.sanity_dom': Operation not supported (95)
lfs swap_layouts: error: cannot swap layout between '/mnt/lustre/d273a.sanity/f273a.sanity_dom' and '/mnt/lustre/d273a.sanity/f273a.sanity_plain': Operation not supported (95)
lfs swap_layouts: error: cannot swap layout between '/mnt/lustre/d273a.sanity/f273a.sanity_comp' and '/mnt/lustre/d273a.sanity/f273a.sanity_dom': Operation not supported (95)
lfs swap_layouts: error: cannot swap layout between '/mnt/lustre/d273a.sanity/f273a.sanity_dom' and '/mnt/lustre/d273a.sanity/f273a.sanity_comp': Operation not supported (95)
PASS 273a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 273b: DoM: race writeback and object destroy ========================================================== 23:30:49 (1713411049)
fail_loc=0x8000016b
fail_val=2
PASS 273b (5s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 273c: race writeback and object destroy === 23:30:57 (1713411057)
fail_loc=0x800001e1
fail_val=2
PASS 273c (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 275: Read on a canceled duplicate lock ==== 23:31:03 (1713411063)
2+0 records in
2+0 records out
2097152 bytes (2.1 MB) copied, 0.0932919 s, 22.5 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0415013 s, 25.3 MB/s
fail_loc=0x8000031f
fail_loc=0x8000032b
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0381732 s, 27.5 MB/s
PASS 275 (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 276: Race between mount and obd_statfs ==== 23:31:09 (1713411069)
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
Stopping /mnt/lustre-ost1 (opts:) on oleg146-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-OST0000
/home/green/git/lustre-release/lustre/tests/test-framework.sh: line 4540: 2744 Killed do_facet ost1 "(while true; do $LCTL get_param obdfilter.*.filesfree > /dev/null 2>&1; done) & pid=\\\$!; echo \\\$pid > $TMP/sanity_276_pid"
PASS 276 (128s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 277: Direct IO shall drop page cache ====== 23:33:20 (1713411200)
ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=0
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b588d800.lru_size=0
ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b588d800.lru_size=0
ldlm.namespaces.lustre-OST0000-osc-ffff8800b588d800.lru_size=0
ldlm.namespaces.lustre-OST0001-osc-ffff8800b588d800.lru_size=0
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0466246 s, 22.5 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0875568 s, 12.0 MB/s
PASS 277 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
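Test 277 first zeroes the client lock LRUs (the lru_size=0 writes above) so cached pages are released, then checks that an O_DIRECT transfer leaves nothing in the page cache; a minimal sketch with an illustrative file name:

  # Drop cached DLM locks (and the clean pages they cover) on every namespace
  lctl set_param ldlm.namespaces.*.lru_size=0
  # Buffered write, then a direct-IO read of the same 1 MiB
  dd if=/dev/zero of=/mnt/lustre/f277 bs=1M count=1
  dd if=/mnt/lustre/f277 of=/dev/null bs=1M count=1 iflag=direct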
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 278: Race starting MDS between MDTs stop/start ========================================================== 23:33:25 (1713411205)
fail_loc=0x8000060c
Stopping /mnt/lustre-mds1 (opts:) on oleg146-server
Stopping /mnt/lustre-mds2 (opts:) on oleg146-server
Starting MDTs
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-MDT0000
fail_loc=0
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-MDT0001
affected facets: mds2
oleg146-server: oleg146-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 1475
oleg146-server: *.lustre-MDT0001.recovery_status status: WAITING
oleg146-server: Waiting 1470 secs for *.lustre-MDT0001.recovery_status recovery done.
status: WAITING
oleg146-server: *.lustre-MDT0001.recovery_status status: COMPLETE
PASS 278 (21s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 280: Race between MGS umount and client llog processing ========================================================== 23:33:49 (1713411229)
192.168.201.146@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg146-client.virtnet /mnt/lustre (opts:)
fail_loc=0x8000015e
fail_val=0
Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre
Stopping /mnt/lustre-mds1 (opts:) on oleg146-server
Starting mgs: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
mount.lustre: mount oleg146-server@tcp:/lustre at /mnt/lustre failed: Input/output error
Is the MGS running?
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre
PASS 280 (31s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300a: basic striped dir sanity test ======= 23:34:22 (1713411262)
  File: '/mnt/lustre/d300a.sanity/striped_dir/a'
  Size: 4096 Blocks: 8 IO Block: 1048576 directory
Device: 2c54f966h/743766374d Inode: 162129771554144257 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2024-04-17 23:34:23.000000000 -0400
Modify: 2024-04-17 23:34:23.000000000 -0400
Change: 2024-04-17 23:34:23.000000000 -0400
 Birth: -
  File: '/mnt/lustre/d300a.sanity/striped_dir/b'
  Size: 4096 Blocks: 8 IO Block: 1048576 directory
Device: 2c54f966h/743766374d Inode: 144115540816822275 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2024-04-17 23:34:23.000000000 -0400
Modify: 2024-04-17 23:34:23.000000000 -0400
Change: 2024-04-17 23:34:23.000000000 -0400
 Birth: -
open(/mnt/lustre/d300a.sanity/striped_dir/f0) error: Permission denied
total: 0 open/close in 0.01 seconds: 0.00 ops/second
  File: '/mnt/lustre/d300a.sanity/striped_dir/a'
  Size: 4096 Blocks: 8 IO Block: 1048576 directory
Device: 2c54f966h/743766374d Inode: 144115540816822276 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2024-04-17 23:34:23.000000000 -0400
Modify: 2024-04-17 23:34:23.000000000 -0400
Change: 2024-04-17 23:34:23.000000000 -0400
 Birth: -
  File: '/mnt/lustre/d300a.sanity/striped_dir/b'
  Size: 4096 Blocks: 8 IO Block: 1048576 directory
Device: 2c54f966h/743766374d Inode: 162129771554144259 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2024-04-17 23:34:23.000000000 -0400
Modify: 2024-04-17 23:34:23.000000000 -0400
Change: 2024-04-17 23:34:23.000000000 -0400
 Birth: -
open(/mnt/lustre/d300a.sanity/striped_dir/f0) error: Permission denied
total: 0 open/close in 0.01 seconds: 0.00 ops/second
PASS 300a (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300b: check ctime/mtime for striped dir === 23:34:28 (1713411268)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d300b.sanity
PASS 300b (23s)
debug_raw_pointers=0
debug_raw_pointers=0
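The "striped dir -i0 -c2 -H fnv_1a_64" lines come from the harness helper that creates the test directories; the equivalent by hand, on an illustrative path:

  # Create a directory striped across two MDTs, starting at MDT index 0
  lfs mkdir -i 0 -c 2 /mnt/lustre/striped_dir    # lfs setdirstripe is the same
  # Show the stripe layout (cf. the lmv_stripe_count dumps in these tests)
  lfs getdirstripe /mnt/lustre/striped_dir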
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300c: chown && check ls under striped directory ========================================================== 23:34:54 (1713411294)
running as uid/gid/euid/egid 500/500/500/500, groups: [createmany] [-o] [/mnt/lustre/d300c.sanity/striped_dir/f] [5000]
 - open/close 1102 (time 1713411304.77 total 10.03 last 109.92)
 - open/close 2201 (time 1713411314.79 total 20.04 last 109.77)
 - open/close 3286 (time 1713411324.79 total 30.04 last 108.45)
 - open/close 4383 (time 1713411334.79 total 40.04 last 109.68)
total: 5000 open/close in 45.72 seconds: 109.37 ops/second
PASS 300c (76s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300d: check default stripe under striped directory ========================================================== 23:36:11 (1713411371)
total 8
drwxr-xr-x 2 root root 8192 Apr 17 23:36 striped_dir
lmv_stripe_count: 0 lmv_stripe_offset: 1 lmv_hash_type: none
total 0
lmv_stripe_count: 2 lmv_stripe_offset: 0 lmv_hash_type: all_char
mdtidx FID[seq:oid:ver]
     0 [0x200005220:0x7:0x0]
     1 [0x240002b10:0x7:0x0]
total: 10 open/close in 0.09 seconds: 106.48 ops/second
total 16
drwxr-xr-x 2 root root 8192 Apr 17 23:36 remote_striped_dir
drwxr-xr-x 2 root root 8192 Apr 17 23:36 striped_dir
lmv_stripe_count: 0 lmv_stripe_offset: 1 lmv_hash_type: none
total 0
lmv_stripe_count: 2 lmv_stripe_offset: 1 lmv_hash_type: crush
mdtidx FID[seq:oid:ver]
     1 [0x240002b12:0x4:0x0]
     0 [0x200005222:0x4:0x0]
total: 10 open/close in 0.09 seconds: 110.32 ops/second
PASS 300d (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300e: check rename under striped directory ========================================================== 23:36:17 (1713411377)
PASS 300e (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300f: check rename cross striped directory ========================================================== 23:36:23 (1713411383)
PASS 300f (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300g: check default striped directory for normal directory ========================================================== 23:36:28 (1713411388)
checking normal_dir 2 1
total: 10 open/close in 0.09 seconds: 113.21 ops/second
- unlinked 0 (time 1713411389 ; total 0 ; last 0)
total: 10 unlinks in 0 seconds: inf unlinks/second
checking normal_dir 1 0
total: 10 open/close in 0.09 seconds: 110.43 ops/second
- unlinked 0 (time 1713411390 ; total 0 ; last 0)
total: 10 unlinks in 0 seconds: inf unlinks/second
checking normal_dir -1 1
total: 10 open/close in 0.06 seconds: 161.61 ops/second
- unlinked 0 (time 1713411391 ; total 0 ; last 0)
total: 10 unlinks in 0 seconds: inf unlinks/second
checking normal_dir 2 -1
total: 10 open/close in 0.09 seconds: 108.40 ops/second
- unlinked 0 (time 1713411392 ; total 0 ; last 0)
total: 10 unlinks in 0 seconds: inf unlinks/second
delete default stripeEA
PASS 300g (6s)
debug_raw_pointers=0
debug_raw_pointers=0
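300g/300h verify that a *default* directory layout is inherited by new subdirectories rather than applied to the parent itself; a sketch on a hypothetical parent:

  # -D stores a default (inheritable) policy instead of striping this dir
  lfs setdirstripe -D -i 0 -c 2 /mnt/lustre/parent
  mkdir /mnt/lustre/parent/child      # child picks up the default striping
  lfs getdirstripe -D /mnt/lustre/parent   # read the default back
  lfs setdirstripe -d /mnt/lustre/parent   # 'delete default stripeEA' above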
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300h: check default striped directory for striped directory ========================================================== 23:36:37 (1713411397)
checking striped_dir 2 1
total: 10 open/close in 0.09 seconds: 109.74 ops/second
- unlinked 0 (time 1713411398 ; total 0 ; last 0)
total: 10 unlinks in 0 seconds: inf unlinks/second
checking striped_dir 1 0
total: 10 open/close in 0.09 seconds: 112.23 ops/second
- unlinked 0 (time 1713411398 ; total 0 ; last 0)
total: 10 unlinks in 0 seconds: inf unlinks/second
checking striped_dir -1 1
total: 10 open/close in 0.09 seconds: 106.94 ops/second
- unlinked 0 (time 1713411399 ; total 0 ; last 0)
total: 10 unlinks in 0 seconds: inf unlinks/second
checking striped_dir 2 -1
total: 10 open/close in 0.09 seconds: 109.91 ops/second
- unlinked 0 (time 1713411400 ; total 0 ; last 0)
total: 10 unlinks in 0 seconds: inf unlinks/second
PASS 300h (5s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300i: client handle unknown hash type striped directory ========================================================== 23:36:45 (1713411405)
total: 10 open/close in 0.12 seconds: 86.49 ops/second
192.168.201.146@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg146-client.virtnet /mnt/lustre (opts:)
Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre
/mnt/lustre/d300i.sanity/hashdir/d2
/mnt/lustre/d300i.sanity/hashdir/d1
/mnt/lustre/d300i.sanity/hashdir/d3
fail_loc=0x1901
fail_val=99
fail_loc=0x1901
fail_val=99
/mnt/lustre/d300i.sanity/striped_dir/f-0 has type file OK
/mnt/lustre/d300i.sanity/striped_dir/f-1 has type file OK
/mnt/lustre/d300i.sanity/striped_dir/f-2 has type file OK
/mnt/lustre/d300i.sanity/striped_dir/f-3 has type file OK
/mnt/lustre/d300i.sanity/striped_dir/f-4 has type file OK
/mnt/lustre/d300i.sanity/striped_dir/f-5 has type file OK
/mnt/lustre/d300i.sanity/striped_dir/f-6 has type file OK
/mnt/lustre/d300i.sanity/striped_dir/f-7 has type file OK
/mnt/lustre/d300i.sanity/striped_dir/f-8 has type file OK
/mnt/lustre/d300i.sanity/striped_dir/f-9 has type file OK
touch: cannot touch '/mnt/lustre/d300i.sanity/striped_dir/f0': Bad file descriptor
fail_loc=0
192.168.201.146@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0
Stopping client oleg146-client.virtnet /mnt/lustre (opts:)
Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre
PASS 300i (5s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300j: test large update record ============ 23:36:52 (1713411412)
fail_loc=0x1702
total: 10 open/close in 0.11 seconds: 88.58 ops/second
fail_loc=0
PASS 300j (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300k: test large striped directory ======== 23:36:57 (1713411417)
fail_loc=0x1703
fail_loc=0
lmv_stripe_count: 2 lmv_stripe_offset: 0 lmv_hash_type: crush
mdtidx FID[seq:oid:ver]
     0 [0x200005220:0x20:0x0]
     1 [0x240002b10:0x20:0x0]
PASS 300k (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300l: non-root user to create dir under striped dir with stale layout ========================================================== 23:37:03 (1713411423)
striped dir -i0 -c2 -H crush /mnt/lustre/d300l.sanity/striped_dir
fail_loc=0x80000158
running as uid/gid/euid/egid 500/500/500/500, groups: [mkdir] [/mnt/lustre/d300l.sanity/striped_dir/test_dir]
PASS 300l (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300m: setstriped directory on single MDT FS ========================================================== 23:37:08 (1713411428)
SKIP: sanity test_300m Only for single MDT
SKIP 300m (1s)
debug_raw_pointers=0
debug_raw_pointers=0
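Nearly every error path in this run is driven through the fail_loc fault-injection knob (0x1901, 0x80000158, and so on above); a sketch of the idiom, assuming the 0x80000000 bit keeps its usual one-shot meaning:

  # Arm fault site 0x158; OR-ing 0x80000000 makes it fire once and
  # self-clear, which is why the tests often skip an explicit reset
  lctl set_param fail_loc=0x80000158
  # ... operation expected to hit the injected failure ...
  lctl set_param fail_loc=0        # disarm explicitly for persistent faults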
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300n: non-root user to create dir under striped dir with default EA ========================================================== 23:37:12 (1713411432)
running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setdirstripe] [-i0] [-c2] [/mnt/lustre/d300n.sanity/striped_dir]
running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setdirstripe] [-i] [1] [-c2] [-D] [/mnt/lustre/d300n.sanity/striped_dir]
running as uid/gid/euid/egid 500/500/500/500, groups: [mkdir] [/mnt/lustre/d300n.sanity/striped_dir/test_dir]
running as uid/gid/euid/egid 500/500/500/500, groups: [mkdir] [/mnt/lustre/d300n.sanity/striped_dir/test_dir1]
running as uid/gid/euid/egid 500/500/500/500, groups: [mkdir] [/mnt/lustre/d300n.sanity/striped_dir/test_dir2]
PASS 300n (5s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity test_300o skipping SLOW test 300o
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300p: create striped directory without space ========================================================== 23:37:21 (1713411441)
fail_loc=0x80001704
PASS 300p (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300q: create remote directory under orphan directory ========================================================== 23:37:26 (1713411446)
mkdir: cannot create directory 'local_dir': No such file or directory
lfs setdirstripe: dirstripe error on 'remote_dir': Stale file handle
lfs setdirstripe: cannot create dir 'remote_dir': Stale file handle
PASS 300q (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300r: test -1 striped directory =========== 23:37:31 (1713411451)
lmv_stripe_count: 2 lmv_stripe_offset: 0 lmv_hash_type: crush
mdtidx FID[seq:oid:ver]
     0 [0x200005220:0x25:0x0]
     1 [0x240002b10:0x25:0x0]
PASS 300r (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300s: test lfs mkdir -c without -i ======== 23:37:36 (1713411456)
lmv_stripe_count: 2 lmv_stripe_offset: 0 lmv_hash_type: crush
mdtidx FID[seq:oid:ver]
     0 [0x200005220:0x26:0x0]
     1 [0x240002b10:0x26:0x0]
PASS 300s (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300t: test max_mdt_stripecount ============ 23:37:41 (1713411461)
lod.lustre-MDT0000-mdtlov.max_mdt_stripecount=1
lod.lustre-MDT0001-mdtlov.max_mdt_stripecount=1
lod.lustre-MDT0000-mdtlov.max_mdt_stripecount=0
lod.lustre-MDT0001-mdtlov.max_mdt_stripecount=0
PASS 300t (4s)
debug_raw_pointers=0
debug_raw_pointers=0
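The knob flipped in 300t caps how many MDTs any new striped directory may span; a sketch using the exact parameter names from the log:

  # Cap directory stripe counts at 1 MDT (0 removes the extra cap)
  lctl set_param lod.lustre-MDT0000-mdtlov.max_mdt_stripecount=1
  lctl set_param lod.lustre-MDT0001-mdtlov.max_mdt_stripecount=1
  # With the cap in place, 'lfs mkdir -c 2' is clamped accordingly
  lctl set_param lod.lustre-MDT0000-mdtlov.max_mdt_stripecount=0
  lctl set_param lod.lustre-MDT0001-mdtlov.max_mdt_stripecount=0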
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300ua: basic overstriped dir sanity test == 23:37:47 (1713411467)
  File: '/mnt/lustre/d300ua.sanity/striped_dir/a'
  Size: 4096 Blocks: 8 IO Block: 1048576 directory
Device: 2c54f966h/743766374d Inode: 162129771587698705 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2024-04-17 23:37:48.000000000 -0400
Modify: 2024-04-17 23:37:48.000000000 -0400
Change: 2024-04-17 23:37:48.000000000 -0400
 Birth: -
  File: '/mnt/lustre/d300ua.sanity/striped_dir/b'
  Size: 4096 Blocks: 8 IO Block: 1048576 directory
Device: 2c54f966h/743766374d Inode: 144115540867153944 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2024-04-17 23:37:48.000000000 -0400
Modify: 2024-04-17 23:37:48.000000000 -0400
Change: 2024-04-17 23:37:48.000000000 -0400
 Birth: -
open(/mnt/lustre/d300ua.sanity/striped_dir/f0) error: Permission denied
total: 0 open/close in 0.01 seconds: 0.00 ops/second
  File: '/mnt/lustre/d300ua.sanity/striped_dir/a'
  Size: 4096 Blocks: 8 IO Block: 1048576 directory
Device: 2c54f966h/743766374d Inode: 144115540867153945 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2024-04-17 23:37:49.000000000 -0400
Modify: 2024-04-17 23:37:49.000000000 -0400
Change: 2024-04-17 23:37:49.000000000 -0400
 Birth: -
  File: '/mnt/lustre/d300ua.sanity/striped_dir/b'
  Size: 4096 Blocks: 8 IO Block: 1048576 directory
Device: 2c54f966h/743766374d Inode: 162129771587698707 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2024-04-17 23:37:49.000000000 -0400
Modify: 2024-04-17 23:37:49.000000000 -0400
Change: 2024-04-17 23:37:49.000000000 -0400
 Birth: -
open(/mnt/lustre/d300ua.sanity/striped_dir/f0) error: Permission denied
total: 0 open/close in 0.01 seconds: 0.00 ops/second
  File: '/mnt/lustre/d300ua.sanity/striped_dir/a'
  Size: 4096 Blocks: 8 IO Block: 1048576 directory
Device: 2c54f966h/743766374d Inode: 162129771587698708 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2024-04-17 23:37:49.000000000 -0400
Modify: 2024-04-17 23:37:49.000000000 -0400
Change: 2024-04-17 23:37:49.000000000 -0400
 Birth: -
  File: '/mnt/lustre/d300ua.sanity/striped_dir/b'
  Size: 4096 Blocks: 8 IO Block: 1048576 directory
Device: 2c54f966h/743766374d Inode: 144115540867153947 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2024-04-17 23:37:49.000000000 -0400
Modify: 2024-04-17 23:37:49.000000000 -0400
Change: 2024-04-17 23:37:49.000000000 -0400
 Birth: -
open(/mnt/lustre/d300ua.sanity/striped_dir/f0) error: Permission denied
total: 0 open/close in 0.01 seconds: 0.00 ops/second
PASS 300ua (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300ub: test MDT overstriping interface & limits ========================================================== 23:37:54 (1713411474)
Testing invalid stripe count, failure expected
Testing invalid striping, failure expected
lmv_stripe_count: 10 lmv_stripe_offset: 0 lmv_hash_type: crush,overstriped
mdtidx FID[seq:oid:ver]
     0 [0x200005220:0x37:0x0]
     1 [0x240002b10:0x32:0x0]
     0 [0x200005220:0x38:0x0]
     1 [0x240002b10:0x33:0x0]
     0 [0x200005220:0x39:0x0]
     1 [0x240002b10:0x34:0x0]
     0 [0x200005220:0x3a:0x0]
     1 [0x240002b10:0x35:0x0]
     0 [0x200005220:0x3b:0x0]
     1 [0x240002b10:0x36:0x0]
stripes on MDT0: 5
PASS 300ub (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300uc: test MDT overstriping as default & inheritance ========================================================== 23:37:59 (1713411479)
PASS 300uc (3s)
debug_raw_pointers=0
debug_raw_pointers=0
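Overstriping (300ua/300ub) places more directory stripes than there are MDTs; a sketch matching the layout dumped above, on an illustrative path:

  # -C allows stripe count > MDT count; 10 stripes on 2 MDTs lands
  # 5 per MDT, which is the 'stripes on MDT0: 5' check above
  lfs mkdir -C 10 /mnt/lustre/overstriped_dir
  lfs getdirstripe /mnt/lustre/overstriped_dir   # hash shows 'overstriped'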
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300ud: dir split ========================== 23:38:04 (1713411484)
lod.lustre-MDT0000-mdtlov.mdt_hash=crush
lod.lustre-MDT0001-mdtlov.mdt_hash=crush
mdt.lustre-MDT0000.enable_dir_restripe=1
mdt.lustre-MDT0001.enable_dir_restripe=1
total: 100 create in 0.46 seconds: 218.18 ops/second
total: 100 mkdir in 0.98 seconds: 101.81 ops/second
Waiting 100s for 'crush'
Updated after 9s: want 'crush' got 'crush'
99 migrated when dir split 1 to 2 stripes
Waiting 100s for 'crush'
Updated after 9s: want 'crush' got 'crush'
67 migrated when dir split 2 to 3 stripes
Waiting 100s for 'crush'
Updated after 7s: want 'crush' got 'crush'
54 migrated when dir split 3 to 4 stripes
Waiting 100s for 'crush'
Updated after 8s: want 'crush' got 'crush'
37 migrated when dir split 4 to 5 stripes
Waiting 100s for 'crush'
Updated after 9s: want 'crush' got 'crush'
31 migrated when dir split 5 to 6 stripes
Waiting 100s for 'crush'
Updated after 7s: want 'crush' got 'crush'
26 migrated when dir split 6 to 7 stripes
Waiting 100s for 'crush'
Updated after 8s: want 'crush' got 'crush'
30 migrated when dir split 7 to 8 stripes
Waiting 100s for 'crush'
Updated after 9s: want 'crush' got 'crush'
17 migrated when dir split 8 to 9 stripes
Waiting 100s for 'crush'
Updated after 7s: want 'crush' got 'crush'
18 migrated when dir split 9 to 10 stripes
mdt.lustre-MDT0000.enable_dir_restripe=0
mdt.lustre-MDT0001.enable_dir_restripe=0
PASS 300ud (87s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300ue: dir merge ========================== 23:39:33 (1713411573)
lod.lustre-MDT0000-mdtlov.mdt_hash=crush
lod.lustre-MDT0001-mdtlov.mdt_hash=crush
mdt.lustre-MDT0000.enable_dir_restripe=1
mdt.lustre-MDT0001.enable_dir_restripe=1
striped dir -i0 -C10 -H crush /mnt/lustre/d300ue.sanity
total: 100 create in 0.51 seconds: 197.71 ops/second
total: 100 mkdir in 0.48 seconds: 210.21 ops/second
Waiting 100s for 'crush,fixed'
Updated after 3s: want 'crush,fixed' got 'crush,fixed'
18 migrated when dir merge 10 to 9 stripes
Waiting 100s for 'crush,fixed'
Updated after 2s: want 'crush,fixed' got 'crush,fixed'
17 migrated when dir merge 9 to 8 stripes
Waiting 100s for 'crush,fixed'
Updated after 2s: want 'crush,fixed' got 'crush,fixed'
30 migrated when dir merge 8 to 7 stripes
Waiting 100s for 'crush,fixed'
Updated after 2s: want 'crush,fixed' got 'crush,fixed'
26 migrated when dir merge 7 to 6 stripes
Waiting 100s for 'crush,fixed'
Updated after 2s: want 'crush,fixed' got 'crush,fixed'
31 migrated when dir merge 6 to 5 stripes
Waiting 100s for 'crush,fixed'
Updated after 2s: want 'crush,fixed' got 'crush,fixed'
37 migrated when dir merge 5 to 4 stripes
Waiting 100s for 'crush,fixed'
Updated after 2s: want 'crush,fixed' got 'crush,fixed'
54 migrated when dir merge 4 to 3 stripes
Waiting 100s for 'crush,fixed'
Updated after 2s: want 'crush,fixed' got 'crush,fixed'
67 migrated when dir merge 3 to 2 stripes
Waiting 100s for 'crush,fixed'
Updated after 9s: want 'crush,fixed' got 'crush,fixed'
99 migrated when dir merge 2 to 1 stripes
mdt.lustre-MDT0000.enable_dir_restripe=0
mdt.lustre-MDT0001.enable_dir_restripe=0
PASS 300ue (39s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300uf: migrate with too many local locks == 23:40:15 (1713411615)
PASS 300uf (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 300ug: migrate overstriped dirs =========== 23:40:21 (1713411621)
lmv_stripe_count: 8 lmv_stripe_offset: 0 lmv_hash_type: crush,fixed
mdtidx FID[seq:oid:ver]
     0 [0x200005220:0x55c:0x0]
     1 [0x240002b10:0x8e:0x0]
     0 [0x200005220:0x55d:0x0]
     1 [0x240002b10:0x8f:0x0]
     0 [0x200005220:0x55e:0x0]
     1 [0x240002b10:0x90:0x0]
     0 [0x200005220:0x55f:0x0]
     1 [0x240002b10:0x91:0x0]
PASS 300ug (4s)
debug_raw_pointers=0
debug_raw_pointers=0
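Directory split/merge (300ud/300ue) only runs when restriping is enabled on each MDT; a sketch of the enabling step taken verbatim from the parameters above, with the restripe request itself shown under the assumption that changing the stripe count of an existing directory is what triggers it:

  # Allow the MDTs to restripe directories in the background
  lctl set_param mdt.lustre-MDT0000.enable_dir_restripe=1
  lctl set_param mdt.lustre-MDT0001.enable_dir_restripe=1
  # Request a new stripe count on a populated dir; entries are then
  # migrated incrementally ('NN migrated when dir split ...' above)
  lfs setdirstripe -c 2 /mnt/lustre/busy_dir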
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 310a: open unlink remote file ============= 23:40:27 (1713411627)
Can't lstat /mnt/lustre/d310a.sanity/src_dir/a: No such file or directory
/mnt/lustre/d310a.sanity/tgt_dir/b has type file OK
opening
writing
unlinking /mnt/lustre/d310a.sanity/tgt_dir/b
accessing (1)
seeking (1)
accessing (2)
fstat...
reading
comparing data
truncating
seeking (2)
writing again
seeking (3)
reading again
comparing data again
closing
SUCCESS - goto beer
/mnt/lustre/d310a.sanity/tgt_dir/b: absent OK
PASS 310a (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 310b: unlink remote file with multiple links while open ========================================================== 23:40:32 (1713411632)
Can't lstat /mnt/lustre/d310b.sanity/src_dir/a: No such file or directory
/mnt/lustre/d310b.sanity/tgt_dir/b has type file OK
/mnt/lustre/d310b.sanity/tgt_dir/b has type file OK
PASS 310b (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 310c: open-unlink remote file with multiple links ========================================================== 23:40:38 (1713411638)
SKIP: sanity test_310c needs >= 4 MDTs
SKIP 310c (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 311: disable OSP precreate, and unlink should destroy objs ========================================================== 23:40:41 (1713411641)
total: 1000 open/close in 4.23 seconds: 236.50 ops/second
- unlinked 0 (time 1713411650 ; total 0 ; last 0)
total: 1000 unlinks in 2 seconds: 500.000000 unlinks/second
waited 3 sec, old Iused 15671, new Iused 14807
PASS 311 (16s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 312: make sure ZFS adjusts its block size by write pattern ========================================================== 23:41:00 (1713411660)
SKIP: sanity test_312 the test only applies to zfs
SKIP 312 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 313: io should fail after last_rcvd update fail ========================================================== 23:41:04 (1713411664)
fail_loc=0x720
dd: failed to open '/mnt/lustre/f313.sanity': Input/output error
fail_loc=0
PASS 313 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 314: OSP shouldn't fail after last_rcvd update failure ========================================================== 23:41:09 (1713411669)
fail_loc=0x720
Waiting for MDT destroys to complete
fail_loc=0
PASS 314 (15s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 315: read should be accounted ============= 23:41:27 (1713411687)
PASS 315 (6s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 316: lfs migrate of file with large_xattr enabled ========================================================== 23:41:36 (1713411696)
PASS 316 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 317: Verify blocks get correctly update after truncate ========================================================== 23:41:41 (1713411701)
1+0 records in
1+0 records out
5242880 bytes (5.2 MB) copied, 0.213682 s, 24.5 MB/s
/mnt/lustre/f317.sanity has size 2097152 OK
/mnt/lustre/f317.sanity has size 4097 OK
/mnt/lustre/f317.sanity has size 4000 OK
/mnt/lustre/f317.sanity has size 509 OK
/mnt/lustre/f317.sanity has size 0 OK
1+0 records in
1+0 records out
65536 bytes (66 kB) copied, 0.0230634 s, 2.8 MB/s
/mnt/lustre/f317.sanity has size 331775 OK
PASS 317 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
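Test 317's size checks can be reproduced with standard tools; a sketch on an illustrative file:

  # Shrink the file and confirm the block count follows the new size
  truncate -s 4097 /mnt/lustre/f317
  stat -c 'size=%s blocks=%b' /mnt/lustre/f317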
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 318: Verify async readahead tunables ====== 23:41:47 (1713411707)
llite.lustre-ffff88012a451000.max_read_ahead_async_active=256
llite.lustre-ffff88012a451000.max_read_ahead_async_active=0
llite.lustre-ffff88012a451000.max_read_ahead_async_active=512
llite.lustre-ffff88012a451000.max_read_ahead_async_active=2
error: set_param: setting /sys/fs/lustre/llite/lustre-ffff88012a451000/read_ahead_async_file_threshold_mb=65: Numerical result out of range
error: set_param: setting 'llite/*/read_ahead_async_file_threshold_mb'='65': Numerical result out of range
llite.lustre-ffff88012a451000.read_ahead_async_file_threshold_mb=64
llite.lustre-ffff88012a451000.read_ahead_async_file_threshold_mb=64
PASS 318 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 319: lost lease lock on migrate error ===== 23:41:52 (1713411712)
fail_val=5
fail_loc=0x8000032c
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0124434 s, 0.0 kB/s
PASS 319 (8s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 350: force NID mismatch path to be exercised ========================================================== 23:42:02 (1713411722)
fail_loc=0x1000e001
fail_val=100
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 27414: 13469 Killed ls -lR $DIR/$tdir > /dev/null
PASS 350 (112s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 360: ldiskfs unlink in a separate thread == 23:43:56 (1713411836)
keep default fallocate mode: 0
osd-ldiskfs.delayed_unlink_mb=1MiB
debug=+inode
Count[0]: 0
Count[1]: 100
osd-ldiskfs.delayed_unlink_mb=1024
PASS 360 (12s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398a: direct IO should cancel lock otherwise lockless ========================================================== 23:44:11 (1713411851)
ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear
ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear
ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear
ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear
ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0441206 s, 23.8 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.083788 s, 12.5 MB/s
ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0380887 s, 27.5 MB/s
ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0363731 s, 28.8 MB/s
PASS 398a (3s)
debug_raw_pointers=0
debug_raw_pointers=0
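398a flushes the client lock LRU between dd runs so each direct transfer must decide between taking a new extent lock and going lockless; a sketch of the pattern with an illustrative file:

  # Flush client DLM locks so the next direct IO re-enqueues (or goes lockless)
  lctl set_param ldlm.namespaces.*.lru_size=clear
  # 1 MiB direct write followed by a direct read, like the dd pairs above
  dd if=/dev/zero of=/mnt/lustre/f398a bs=1M count=1 oflag=direct
  dd if=/mnt/lustre/f398a of=/dev/null bs=1M count=1 iflag=direct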
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398b: DIO and buffer IO race ============== 23:44:15 (1713411855)
/usr/bin/fio
48+0 records in
48+0 records out
50331648 bytes (50 MB) copied, 1.12235 s, 44.8 MB/s
mix direct rw 4096 by fio with 4 jobs...
mix buffer rw 4096 by fio with 4 jobs...
rand-rw: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=16
...
fio-3.7
Starting 4 processes
rand-rw: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=16
...
fio-3.7
Starting 4 processes
rand-rw: (groupid=0, jobs=1): err= 0: pid=15871: Wed Apr 17 23:44:30 2024
  read: IOPS=129, BW=520KiB/s (532kB/s)(5992KiB/11528msec)
    clat (usec): min=56, max=162997, avg=4401.63, stdev=4739.32
     lat (usec): min=57, max=162997, avg=4402.38, stdev=4739.31
    clat percentiles (usec):
     | 1.00th=[ 68], 5.00th=[ 1713], 10.00th=[ 2089], 20.00th=[ 2474],
     | 30.00th=[ 2868], 40.00th=[ 3294], 50.00th=[ 3884], 60.00th=[ 4555],
     | 70.00th=[ 5211], 80.00th=[ 5997], 90.00th=[ 7177], 95.00th=[ 8291],
     | 99.00th=[ 12256], 99.50th=[ 13435], 99.90th=[ 29230], 99.95th=[162530],
     | 99.99th=[162530]
   bw ( KiB/s): min= 256, max= 694, per=25.26%, avg=519.57, stdev=80.65, samples=23
   iops : min= 64, max= 173, avg=129.87, stdev=20.11, samples=23
  write: IOPS=136, BW=546KiB/s (559kB/s)(6296KiB/11528msec)
    clat (usec): min=421, max=23888, avg=3088.78, stdev=2771.37
     lat (usec): min=421, max=23889, avg=3089.46, stdev=2771.36
    clat percentiles (usec):
     | 1.00th=[ 494], 5.00th=[ 515], 10.00th=[ 537], 20.00th=[ 578],
     | 30.00th=[ 709], 40.00th=[ 1778], 50.00th=[ 2540], 60.00th=[ 3163],
     | 70.00th=[ 3982], 80.00th=[ 5080], 90.00th=[ 6915], 95.00th=[ 8455],
     | 99.00th=[11731], 99.50th=[13173], 99.90th=[19530], 99.95th=[23987],
     | 99.99th=[23987]
   bw ( KiB/s): min= 304, max= 702, per=25.33%, avg=545.30, stdev=82.25, samples=23
   iops : min= 76, max= 175, avg=136.30, stdev=20.52, samples=23
  lat (usec) : 100=1.79%, 250=0.26%, 500=1.14%, 750=15.36%, 1000=1.95%
  lat (msec) : 2=4.92%, 4=35.81%, 10=36.56%, 20=2.12%, 50=0.07%
  lat (msec) : 250=0.03%
  cpu : usr=0.19%, sys=22.68%, ctx=5888, majf=0, minf=36
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1498,1574,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency : target=0, window=0, percentile=100.00%, depth=16
rand-rw: (groupid=0, jobs=1): err= 0: pid=15873: Wed Apr 17 23:44:30 2024
  read: IOPS=128, BW=512KiB/s (525kB/s)(5940KiB/11591msec)
    clat (usec): min=49, max=19590, avg=4230.55, stdev=2373.69
     lat (usec): min=50, max=19591, avg=4231.22, stdev=2373.72
    clat percentiles (usec):
     | 1.00th=[ 66], 5.00th=[ 1729], 10.00th=[ 2057], 20.00th=[ 2442],
     | 30.00th=[ 2835], 40.00th=[ 3261], 50.00th=[ 3785], 60.00th=[ 4359],
     | 70.00th=[ 5014], 80.00th=[ 5800], 90.00th=[ 7111], 95.00th=[ 8455],
     | 99.00th=[12649], 99.50th=[14091], 99.90th=[19006], 99.95th=[19530],
     | 99.99th=[19530]
   bw ( KiB/s): min= 256, max= 656, per=24.77%, avg=509.61, stdev=80.13, samples=23
   iops : min= 64, max= 164, avg=127.30, stdev=20.01, samples=23
  write: IOPS=136, BW=548KiB/s (561kB/s)(6348KiB/11591msec)
    clat (usec): min=414, max=158858, avg=3298.86, stdev=4814.97
     lat (usec): min=415, max=158859, avg=3299.74, stdev=4815.02
    clat percentiles (usec):
     | 1.00th=[ 490], 5.00th=[ 519], 10.00th=[ 537], 20.00th=[ 603],
     | 30.00th=[ 816], 40.00th=[ 1975], 50.00th=[ 2704], 60.00th=[ 3294],
     | 70.00th=[ 4113], 80.00th=[ 5276], 90.00th=[ 6980], 95.00th=[ 8586],
     | 99.00th=[ 11731], 99.50th=[ 14091], 99.90th=[ 29230], 99.95th=[158335],
     | 99.99th=[158335]
   bw ( KiB/s): min= 336, max= 696, per=25.19%, avg=542.26, stdev=75.88, samples=23
   iops : min= 84, max= 174, avg=135.48, stdev=18.91, samples=23
  lat (usec) : 50=0.03%, 100=1.89%, 250=0.10%, 500=1.17%, 750=13.38%
  lat (usec) : 1000=2.70%
  lat (msec) : 2=5.83%, 4=36.17%, 10=36.10%, 20=2.57%, 50=0.03%
  lat (msec) : 250=0.03%
  cpu : usr=0.18%, sys=22.72%, ctx=6102, majf=0, minf=32
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1485,1587,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency : target=0, window=0, percentile=100.00%, depth=16
rand-rw: (groupid=0, jobs=1): err= 0: pid=15875: Wed Apr 17 23:44:30 2024
  read: IOPS=129, BW=516KiB/s (529kB/s)(6028KiB/11672msec)
    clat (usec): min=46, max=161399, avg=4311.30, stdev=4743.74
     lat (usec): min=47, max=161400, avg=4312.10, stdev=4743.71
    clat percentiles (usec):
     | 1.00th=[ 61], 5.00th=[ 519], 10.00th=[ 2040], 20.00th=[ 2442],
     | 30.00th=[ 2835], 40.00th=[ 3228], 50.00th=[ 3752], 60.00th=[ 4424],
     | 70.00th=[ 5014], 80.00th=[ 5735], 90.00th=[ 7177], 95.00th=[ 8225],
     | 99.00th=[ 13042], 99.50th=[ 15008], 99.90th=[ 30540], 99.95th=[160433],
     | 99.99th=[160433]
   bw ( KiB/s): min= 280, max= 624, per=24.86%, avg=511.43, stdev=74.41, samples=23
   iops : min= 70, max= 156, avg=127.78, stdev=18.60, samples=23
  write: IOPS=134, BW=536KiB/s (549kB/s)(6260KiB/11672msec)
    clat (usec): min=404, max=19548, avg=3259.27, stdev=2873.90
     lat (usec): min=404, max=19549, avg=3260.42, stdev=2873.53
    clat percentiles (usec):
     | 1.00th=[ 490], 5.00th=[ 519], 10.00th=[ 537], 20.00th=[ 594],
     | 30.00th=[ 758], 40.00th=[ 1926], 50.00th=[ 2638], 60.00th=[ 3261],
     | 70.00th=[ 4228], 80.00th=[ 5538], 90.00th=[ 7308], 95.00th=[ 8848],
     | 99.00th=[11469], 99.50th=[12780], 99.90th=[17695], 99.95th=[19530],
     | 99.99th=[19530]
   bw ( KiB/s): min= 336, max= 718, per=24.64%, avg=530.57, stdev=88.74, samples=23
   iops : min= 84, max= 179, avg=132.57, stdev=22.16, samples=23
  lat (usec) : 50=0.13%, 100=2.12%, 250=0.10%, 500=1.33%, 750=13.93%
  lat (usec) : 1000=2.44%
  lat (msec) : 2=5.27%, 4=35.74%, 10=36.20%, 20=2.64%, 50=0.07%
  lat (msec) : 250=0.03%
  cpu : usr=0.27%, sys=22.37%, ctx=6193, majf=0, minf=34
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1507,1565,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency : target=0, window=0, percentile=100.00%, depth=16
rand-rw: (groupid=0, jobs=1): err= 0: pid=15877: Wed Apr 17 23:44:30 2024
  read: IOPS=129, BW=519KiB/s (531kB/s)(6052KiB/11662msec)
    clat (usec): min=47, max=17832, avg=4267.81, stdev=2351.01
     lat (usec): min=48, max=17833, avg=4268.71, stdev=2350.94
    clat percentiles (usec):
     | 1.00th=[ 65], 5.00th=[ 906], 10.00th=[ 2073], 20.00th=[ 2474],
     | 30.00th=[ 2835], 40.00th=[ 3294], 50.00th=[ 3884], 60.00th=[ 4490],
     | 70.00th=[ 5145], 80.00th=[ 5932], 90.00th=[ 7111], 95.00th=[ 8291],
     | 99.00th=[12387], 99.50th=[13829], 99.90th=[15401], 99.95th=[17957],
     | 99.99th=[17957]
   bw ( KiB/s): min= 200, max= 648, per=25.01%, avg=514.48, stdev=90.05, samples=23
   iops : min= 50, max= 162, avg=128.52, stdev=22.49, samples=23
  write: IOPS=133, BW=535KiB/s (548kB/s)(6236KiB/11662msec)
    clat (usec): min=433, max=163571, avg=3290.92, stdev=4954.10
     lat (usec): min=434, max=163573, avg=3291.57, stdev=4954.13
    clat percentiles (usec):
     | 1.00th=[ 494], 5.00th=[ 519], 10.00th=[ 537], 20.00th=[ 594],
     | 30.00th=[ 734], 40.00th=[ 1958], 50.00th=[ 2638], 60.00th=[ 3228],
     | 70.00th=[ 4080], 80.00th=[ 5407], 90.00th=[ 6980], 95.00th=[ 8455],
     | 99.00th=[ 12125], 99.50th=[ 14222], 99.90th=[ 31589], 99.95th=[162530],
     | 99.99th=[162530]
   bw ( KiB/s): min= 216, max= 744, per=24.64%, avg=530.48, stdev=106.64, samples=23
   iops : min= 54, max= 186, avg=132.52, stdev=26.64, samples=23
  lat (usec) : 50=0.10%, 100=2.08%, 250=0.20%, 500=0.98%, 750=14.81%
  lat (usec) : 1000=1.63%
  lat (msec) : 2=4.85%, 4=36.00%, 10=36.91%, 20=2.38%, 50=0.03%
  lat (msec) : 250=0.03%
  cpu : usr=0.15%, sys=22.39%, ctx=6003, majf=0, minf=33
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1513,1559,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=2057KiB/s (2107kB/s), 512KiB/s-520KiB/s (525kB/s-532kB/s), io=23.4MiB (24.6MB), run=11528-11672msec
  WRITE: bw=2154KiB/s (2206kB/s), 535KiB/s-548KiB/s (548kB/s-561kB/s), io=24.6MiB (25.7MB), run=11528-11672msec
rand-rw: (groupid=0, jobs=1): err= 0: pid=15872: Wed Apr 17 23:44:34 2024
  read: IOPS=92, BW=371KiB/s (380kB/s)(5992KiB/16160msec)
    clat (usec): min=1607, max=192689, avg=4388.06, stdev=5584.09
     lat (usec): min=1608, max=192691, avg=4389.05, stdev=5584.13
    clat percentiles (usec):
     | 1.00th=[ 1745], 5.00th=[ 1926], 10.00th=[ 2057], 20.00th=[ 2343],
     | 30.00th=[ 2540], 40.00th=[ 2802], 50.00th=[ 3326], 60.00th=[ 3916],
     | 70.00th=[ 4359], 80.00th=[ 5538], 90.00th=[ 8356], 95.00th=[ 10683],
     | 99.00th=[ 13698], 99.50th=[ 16057], 99.90th=[ 19530], 99.95th=[191890],
     | 99.99th=[191890]
   bw ( KiB/s): min= 176, max= 744, per=24.99%, avg=370.31, stdev=125.65, samples=32
   iops : min= 44, max= 186, avg=92.47, stdev=31.45, samples=32
  write: IOPS=97, BW=390KiB/s (399kB/s)(6296KiB/16160msec)
    clat (usec): min=2195, max=24623, avg=6044.83, stdev=3168.84
     lat (usec): min=2196, max=24624, avg=6045.80, stdev=3168.82
    clat percentiles (usec):
     | 1.00th=[ 2540], 5.00th=[ 2900], 10.00th=[ 3130], 20.00th=[ 3458],
     | 30.00th=[ 3851], 40.00th=[ 4424], 50.00th=[ 5145], 60.00th=[ 6194],
     | 70.00th=[ 6915], 80.00th=[ 7963], 90.00th=[10290], 95.00th=[12387],
     | 99.00th=[17433], 99.50th=[18744], 99.90th=[21890], 99.95th=[24511],
     | 99.99th=[24511]
   bw ( KiB/s): min= 192, max= 760, per=25.00%, avg=388.06, stdev=127.79, samples=32
   iops : min= 48, max= 190, avg=96.91, stdev=31.96, samples=32
  lat (msec) : 2=3.74%, 4=43.29%, 10=44.40%, 20=8.33%, 50=0.20%
  lat (msec) : 250=0.03%
  cpu : usr=0.19%, sys=11.61%, ctx=3609, majf=0, minf=34
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1498,1574,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency : target=0, window=0, percentile=100.00%, depth=16
rand-rw: (groupid=0, jobs=1): err= 0: pid=15874: Wed Apr 17 23:44:34 2024
  read: IOPS=92, BW=371KiB/s (379kB/s)(5940KiB/16031msec)
    clat (usec): min=1570, max=161276, avg=4312.38, stdev=4935.64
     lat (usec): min=1571, max=161277, avg=4313.29, stdev=4935.65
    clat percentiles (usec):
     | 1.00th=[ 1729], 5.00th=[ 1893], 10.00th=[ 2057], 20.00th=[ 2311],
     | 30.00th=[ 2573], 40.00th=[ 2868], 50.00th=[ 3294], 60.00th=[ 3851],
     | 70.00th=[ 4293], 80.00th=[ 5276], 90.00th=[ 7898], 95.00th=[ 10683],
     | 99.00th=[ 15008], 99.50th=[ 16909], 99.90th=[ 23987], 99.95th=[160433],
     | 99.99th=[160433]
   bw ( KiB/s): min= 168, max= 680, per=24.99%, avg=370.31, stdev=115.02, samples=32
   iops : min= 42, max= 170, avg=92.47, stdev=28.73, samples=32
  write: IOPS=98, BW=396KiB/s (405kB/s)(6348KiB/16031msec)
    clat (usec): min=2197, max=31586, avg=6020.41, stdev=3124.78
(usec): min=2197, max=31587, avg=6021.25, stdev=3124.72 clat percentiles (usec): | 1.00th=[ 2409], 5.00th=[ 2900], 10.00th=[ 3163], 20.00th=[ 3556], | 30.00th=[ 3982], 40.00th=[ 4424], 50.00th=[ 5145], 60.00th=[ 6063], | 70.00th=[ 6915], 80.00th=[ 7963], 90.00th=[10028], 95.00th=[11863], | 99.00th=[16909], 99.50th=[18482], 99.90th=[29754], 99.95th=[31589], | 99.99th=[31589] bw ( KiB/s): min= 224, max= 776, per=25.49%, avg=395.56, stdev=124.42, samples=32 iops : min= 56, max= 194, avg=98.78, stdev=31.11, samples=32 lat (msec) : 2=3.91%, 4=42.38%, 10=45.74%, 20=7.71%, 50=0.23% lat (msec) : 250=0.03% cpu : usr=0.18%, sys=11.68%, ctx=3528, majf=0, minf=33 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1485,1587,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15876: Wed Apr 17 23:44:34 2024 read: IOPS=93, BW=376KiB/s (385kB/s)(6028KiB/16050msec) clat (usec): min=1601, max=23962, avg=4249.69, stdev=2768.72 lat (usec): min=1602, max=23963, avg=4250.34, stdev=2768.71 clat percentiles (usec): | 1.00th=[ 1713], 5.00th=[ 1893], 10.00th=[ 2040], 20.00th=[ 2311], | 30.00th=[ 2540], 40.00th=[ 2835], 50.00th=[ 3261], 60.00th=[ 3851], | 70.00th=[ 4359], 80.00th=[ 5604], 90.00th=[ 8291], 95.00th=[ 9896], | 99.00th=[14877], 99.50th=[16581], 99.90th=[20841], 99.95th=[23987], | 99.99th=[23987] bw ( KiB/s): min= 160, max= 728, per=25.31%, avg=375.06, stdev=127.23, samples=32 iops : min= 40, max= 182, avg=93.62, stdev=31.83, samples=32 write: IOPS=97, BW=390KiB/s (399kB/s)(6260KiB/16050msec) clat (msec): min=2, max=163, avg= 6.12, stdev= 5.08 lat (msec): min=2, max=163, avg= 6.12, stdev= 5.08 clat percentiles (msec): | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 4], 20.00th=[ 4], | 30.00th=[ 4], 40.00th=[ 5], 50.00th=[ 6], 60.00th=[ 7], | 70.00th=[ 7], 80.00th=[ 9], 90.00th=[ 11], 95.00th=[ 13], | 99.00th=[ 18], 99.50th=[ 21], 99.90th=[ 25], 99.95th=[ 163], | 99.99th=[ 163] bw ( KiB/s): min= 200, max= 776, per=25.03%, avg=388.53, stdev=130.11, samples=32 iops : min= 50, max= 194, avg=97.00, stdev=32.57, samples=32 lat (msec) : 2=4.00%, 4=43.65%, 10=44.86%, 20=7.16%, 50=0.29% lat (msec) : 250=0.03% cpu : usr=0.27%, sys=11.46%, ctx=3623, majf=0, minf=34 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1507,1565,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15878: Wed Apr 17 23:44:34 2024 read: IOPS=93, BW=374KiB/s (383kB/s)(6052KiB/16196msec) clat (usec): min=1519, max=21399, avg=4284.53, stdev=2860.55 lat (usec): min=1519, max=21400, avg=4285.19, stdev=2860.53 clat percentiles (usec): | 1.00th=[ 1713], 5.00th=[ 1909], 10.00th=[ 2024], 20.00th=[ 2278], | 30.00th=[ 2507], 40.00th=[ 2802], 50.00th=[ 3195], 60.00th=[ 3851], | 70.00th=[ 4359], 80.00th=[ 5604], 90.00th=[ 8586], 95.00th=[10683], | 99.00th=[14222], 99.50th=[15401], 99.90th=[19006], 99.95th=[21365], | 99.99th=[21365] bw ( KiB/s): min= 176, max= 664, per=25.04%, avg=371.06, stdev=118.99, samples=32 iops : min= 44, max= 166, avg=92.66, stdev=29.78, samples=32 write: IOPS=96, BW=385KiB/s (394kB/s)(6236KiB/16196msec) 
clat (msec): min=2, max=163, avg= 6.18, stdev= 5.08 lat (msec): min=2, max=163, avg= 6.19, stdev= 5.08 clat percentiles (msec): | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 4], 20.00th=[ 4], | 30.00th=[ 4], 40.00th=[ 5], 50.00th=[ 6], 60.00th=[ 7], | 70.00th=[ 7], 80.00th=[ 9], 90.00th=[ 11], 95.00th=[ 13], | 99.00th=[ 17], 99.50th=[ 21], 99.90th=[ 31], 99.95th=[ 163], | 99.99th=[ 163] bw ( KiB/s): min= 200, max= 808, per=24.73%, avg=383.81, stdev=132.91, samples=32 iops : min= 50, max= 202, avg=95.84, stdev=33.24, samples=32 lat (msec) : 2=4.04%, 4=43.39%, 10=43.98%, 20=8.30%, 50=0.26% lat (msec) : 250=0.03% cpu : usr=0.17%, sys=11.68%, ctx=3583, majf=0, minf=31 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1513,1559,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 Run status group 0 (all jobs): READ: bw=1483KiB/s (1518kB/s), 371KiB/s-376KiB/s (379kB/s-385kB/s), io=23.4MiB (24.6MB), run=16031-16196msec WRITE: bw=1552KiB/s (1589kB/s), 385KiB/s-396KiB/s (394kB/s-405kB/s), io=24.6MiB (25.7MB), run=16031-16196msec mix direct rw 16384 by fio with 4 jobs... mix buffer rw 16384 by fio with 4 jobs... rand-rw: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=psync, iodepth=16 ... fio-3.7 Starting 4 processes rand-rw: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=psync, iodepth=16 ... fio-3.7 Starting 4 processes rand-rw: (groupid=0, jobs=1): err= 0: pid=15911: Wed Apr 17 23:44:41 2024 read: IOPS=60, BW=966KiB/s (989kB/s)(5920KiB/6127msec) clat (usec): min=69, max=48386, avg=10923.93, stdev=5145.46 lat (usec): min=70, max=48387, avg=10924.49, stdev=5145.50 clat percentiles (usec): | 1.00th=[ 133], 5.00th=[ 6521], 10.00th=[ 7373], 20.00th=[ 8291], | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10552], | 70.00th=[11338], 80.00th=[11994], 90.00th=[14484], 95.00th=[17171], | 99.00th=[36963], 99.50th=[47449], 99.90th=[48497], 99.95th=[48497], | 99.99th=[48497] bw ( KiB/s): min= 736, max= 1152, per=25.01%, avg=960.00, stdev=151.33, samples=12 iops : min= 46, max= 72, avg=60.00, stdev= 9.46, samples=12 write: IOPS=64, BW=1039KiB/s (1064kB/s)(6368KiB/6127msec) clat (usec): min=708, max=54233, avg=5222.00, stdev=5706.47 lat (usec): min=708, max=54234, avg=5222.93, stdev=5706.45 clat percentiles (usec): | 1.00th=[ 766], 5.00th=[ 799], 10.00th=[ 865], 20.00th=[ 1991], | 30.00th=[ 2311], 40.00th=[ 2835], 50.00th=[ 3195], 60.00th=[ 4146], | 70.00th=[ 5473], 80.00th=[ 7504], 90.00th=[11863], 95.00th=[16581], | 99.00th=[31589], 99.50th=[40109], 99.90th=[54264], 99.95th=[54264], | 99.99th=[54264] bw ( KiB/s): min= 768, max= 1408, per=25.81%, avg=1042.67, stdev=215.08, samples=12 iops : min= 48, max= 88, avg=65.17, stdev=13.44, samples=12 lat (usec) : 100=0.39%, 250=0.26%, 750=0.39%, 1000=6.51% lat (msec) : 2=3.78%, 4=20.05%, 10=36.59%, 20=29.17%, 50=2.73% lat (msec) : 100=0.13% cpu : usr=0.15%, sys=19.80%, ctx=2730, majf=0, minf=35 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=370,398,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: 
(groupid=0, jobs=1): err= 0: pid=15912: Wed Apr 17 23:44:41 2024 read: IOPS=57, BW=921KiB/s (943kB/s)(5744KiB/6238msec) clat (usec): min=59, max=51113, avg=10717.63, stdev=4126.58 lat (usec): min=59, max=51114, avg=10718.23, stdev=4126.57 clat percentiles (usec): | 1.00th=[ 74], 5.00th=[ 6652], 10.00th=[ 7439], 20.00th=[ 8356], | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[10814], | 70.00th=[11469], 80.00th=[12518], 90.00th=[14222], 95.00th=[16581], | 99.00th=[23987], 99.50th=[30016], 99.90th=[51119], 99.95th=[51119], | 99.99th=[51119] bw ( KiB/s): min= 480, max= 1149, per=23.61%, avg=906.25, stdev=193.79, samples=12 iops : min= 30, max= 71, avg=56.50, stdev=12.04, samples=12 write: IOPS=65, BW=1049KiB/s (1074kB/s)(6544KiB/6238msec) clat (usec): min=738, max=77069, avg=5827.42, stdev=7952.00 lat (usec): min=739, max=77070, avg=5828.38, stdev=7952.01 clat percentiles (usec): | 1.00th=[ 766], 5.00th=[ 791], 10.00th=[ 840], 20.00th=[ 1336], | 30.00th=[ 2245], 40.00th=[ 2638], 50.00th=[ 3261], 60.00th=[ 4178], | 70.00th=[ 5276], 80.00th=[ 7308], 90.00th=[13698], 95.00th=[18482], | 99.00th=[43254], 99.50th=[45351], 99.90th=[77071], 99.95th=[77071], | 99.99th=[77071] bw ( KiB/s): min= 768, max= 1408, per=25.21%, avg=1018.25, stdev=216.20, samples=12 iops : min= 48, max= 88, avg=63.50, stdev=13.56, samples=12 lat (usec) : 100=0.91%, 250=0.13%, 750=0.26%, 1000=8.59% lat (msec) : 2=3.12%, 4=19.01%, 10=33.59%, 20=30.73%, 50=3.39% lat (msec) : 100=0.26% cpu : usr=0.08%, sys=18.41%, ctx=2730, majf=0, minf=32 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=359,409,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15913: Wed Apr 17 23:44:41 2024 read: IOPS=63, BW=1018KiB/s (1042kB/s)(6256KiB/6147msec) clat (usec): min=62, max=36541, avg=10665.08, stdev=3988.83 lat (usec): min=63, max=36542, avg=10666.75, stdev=3988.99 clat percentiles (usec): | 1.00th=[ 81], 5.00th=[ 6849], 10.00th=[ 7570], 20.00th=[ 8455], | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10552], | 70.00th=[11207], 80.00th=[12256], 90.00th=[14091], 95.00th=[17433], | 99.00th=[26608], 99.50th=[29754], 99.90th=[36439], 99.95th=[36439], | 99.99th=[36439] bw ( KiB/s): min= 640, max= 1344, per=26.47%, avg=1016.00, stdev=208.34, samples=12 iops : min= 40, max= 84, avg=63.50, stdev=13.02, samples=12 write: IOPS=61, BW=981KiB/s (1005kB/s)(6032KiB/6147msec) clat (usec): min=687, max=74493, avg=5220.20, stdev=7437.90 lat (usec): min=688, max=74494, avg=5222.58, stdev=7438.22 clat percentiles (usec): | 1.00th=[ 742], 5.00th=[ 791], 10.00th=[ 816], 20.00th=[ 1057], | 30.00th=[ 2147], 40.00th=[ 2474], 50.00th=[ 2966], 60.00th=[ 3785], | 70.00th=[ 4948], 80.00th=[ 6980], 90.00th=[11207], 95.00th=[16909], | 99.00th=[36439], 99.50th=[65274], 99.90th=[74974], 99.95th=[74974], | 99.99th=[74974] bw ( KiB/s): min= 640, max= 1344, per=23.83%, avg=962.67, stdev=218.51, samples=12 iops : min= 40, max= 84, avg=60.17, stdev=13.66, samples=12 lat (usec) : 100=0.52%, 250=0.39%, 750=0.52%, 1000=8.46% lat (msec) : 2=3.91%, 4=17.97%, 10=37.24%, 20=27.73%, 50=2.86% lat (msec) : 100=0.39% cpu : usr=0.07%, sys=17.80%, ctx=2752, majf=0, minf=34 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=391,377,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15914: Wed Apr 17 23:44:41 2024 read: IOPS=61, BW=992KiB/s (1016kB/s)(6032KiB/6082msec) clat (usec): min=64, max=57284, avg=10726.45, stdev=4601.55 lat (usec): min=64, max=57285, avg=10727.16, stdev=4601.65 clat percentiles (usec): | 1.00th=[ 73], 5.00th=[ 6718], 10.00th=[ 7635], 20.00th=[ 8455], | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10814], | 70.00th=[11600], 80.00th=[12387], 90.00th=[14091], 95.00th=[16909], | 99.00th=[30278], 99.50th=[36439], 99.90th=[57410], 99.95th=[57410], | 99.99th=[57410] bw ( KiB/s): min= 608, max= 1280, per=25.63%, avg=984.00, stdev=178.49, samples=12 iops : min= 38, max= 80, avg=61.50, stdev=11.16, samples=12 write: IOPS=64, BW=1029KiB/s (1053kB/s)(6256KiB/6082msec) clat (usec): min=702, max=70298, avg=5196.24, stdev=6930.89 lat (usec): min=703, max=70300, avg=5197.21, stdev=6930.91 clat percentiles (usec): | 1.00th=[ 750], 5.00th=[ 807], 10.00th=[ 840], 20.00th=[ 1303], | 30.00th=[ 2147], 40.00th=[ 2409], 50.00th=[ 3032], 60.00th=[ 3949], | 70.00th=[ 5211], 80.00th=[ 7111], 90.00th=[11207], 95.00th=[14746], | 99.00th=[43779], 99.50th=[52167], 99.90th=[70779], 99.95th=[70779], | 99.99th=[70779] bw ( KiB/s): min= 672, max= 1376, per=25.35%, avg=1024.00, stdev=197.73, samples=12 iops : min= 42, max= 86, avg=64.00, stdev=12.36, samples=12 lat (usec) : 100=1.30%, 250=0.13%, 750=0.52%, 1000=8.33% lat (msec) : 2=3.91%, 4=17.97%, 10=36.72%, 20=28.65%, 50=2.08% lat (msec) : 100=0.39% cpu : usr=0.05%, sys=19.36%, ctx=2729, majf=0, minf=33 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=377,391,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 Run status group 0 (all jobs): READ: bw=3840KiB/s (3932kB/s), 921KiB/s-1018KiB/s (943kB/s-1042kB/s), io=23.4MiB (24.5MB), run=6082-6238msec WRITE: bw=4040KiB/s (4137kB/s), 981KiB/s-1049KiB/s (1005kB/s-1074kB/s), io=24.6MiB (25.8MB), run=6082-6238msec rand-rw: (groupid=0, jobs=1): err= 0: pid=15916: Wed Apr 17 23:44:41 2024 read: IOPS=60, BW=973KiB/s (997kB/s)(5920KiB/6083msec) clat (usec): min=1765, max=51153, avg=6138.00, stdev=5974.43 lat (usec): min=1765, max=51154, avg=6138.67, stdev=5974.36 clat percentiles (usec): | 1.00th=[ 1975], 5.00th=[ 2073], 10.00th=[ 2245], 20.00th=[ 2474], | 30.00th=[ 2737], 40.00th=[ 3163], 50.00th=[ 3752], 60.00th=[ 4948], | 70.00th=[ 6783], 80.00th=[ 8848], 90.00th=[11994], 95.00th=[15795], | 99.00th=[33817], 99.50th=[47449], 99.90th=[51119], 99.95th=[51119], | 99.99th=[51119] bw ( KiB/s): min= 640, max= 1248, per=25.33%, avg=972.83, stdev=183.15, samples=12 iops : min= 40, max= 78, avg=60.67, stdev=11.41, samples=12 write: IOPS=65, BW=1047KiB/s (1072kB/s)(6368KiB/6083msec) clat (usec): min=2516, max=49917, avg=9560.87, stdev=6579.22 lat (usec): min=2516, max=49918, avg=9561.95, stdev=6579.24 clat percentiles (usec): | 1.00th=[ 2769], 5.00th=[ 3228], 10.00th=[ 3818], 20.00th=[ 4752], | 30.00th=[ 5800], 40.00th=[ 6718], 50.00th=[ 7963], 60.00th=[ 8979], | 70.00th=[10552], 80.00th=[12911], 90.00th=[18482], 95.00th=[21365], | 99.00th=[42206], 99.50th=[44303], 99.90th=[50070], 99.95th=[50070], | 
99.99th=[50070] bw ( KiB/s): min= 800, max= 1469, per=25.99%, avg=1050.17, stdev=210.80, samples=12 iops : min= 50, max= 91, avg=65.50, stdev=13.04, samples=12 lat (msec) : 2=0.52%, 4=31.38%, 10=43.49%, 20=20.31%, 50=4.17% lat (msec) : 100=0.13% cpu : usr=0.07%, sys=9.36%, ctx=1179, majf=0, minf=31 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=370,398,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15917: Wed Apr 17 23:44:41 2024 read: IOPS=57, BW=921KiB/s (943kB/s)(5744KiB/6235msec) clat (usec): min=1758, max=53430, avg=6183.71, stdev=5384.60 lat (usec): min=1759, max=53430, avg=6184.32, stdev=5384.57 clat percentiles (usec): | 1.00th=[ 1827], 5.00th=[ 2040], 10.00th=[ 2147], 20.00th=[ 2474], | 30.00th=[ 2737], 40.00th=[ 3130], 50.00th=[ 3982], 60.00th=[ 5604], | 70.00th=[ 7832], 80.00th=[ 9896], 90.00th=[11994], 95.00th=[14091], | 99.00th=[25560], 99.50th=[39584], 99.90th=[53216], 99.95th=[53216], | 99.99th=[53216] bw ( KiB/s): min= 608, max= 1312, per=23.88%, avg=917.33, stdev=215.46, samples=12 iops : min= 38, max= 82, avg=57.33, stdev=13.47, samples=12 write: IOPS=65, BW=1050KiB/s (1075kB/s)(6544KiB/6235msec) clat (usec): min=2384, max=70949, avg=9799.75, stdev=7452.74 lat (usec): min=2386, max=70950, avg=9800.88, stdev=7452.80 clat percentiles (usec): | 1.00th=[ 2737], 5.00th=[ 3228], 10.00th=[ 3589], 20.00th=[ 4948], | 30.00th=[ 5669], 40.00th=[ 6783], 50.00th=[ 8160], 60.00th=[ 9110], | 70.00th=[11338], 80.00th=[13829], 90.00th=[16581], 95.00th=[21365], | 99.00th=[47973], 99.50th=[51119], 99.90th=[70779], 99.95th=[70779], | 99.99th=[70779] bw ( KiB/s): min= 832, max= 1344, per=25.34%, avg=1024.00, stdev=189.07, samples=12 iops : min= 52, max= 84, avg=64.00, stdev=11.82, samples=12 lat (msec) : 2=1.56%, 4=29.43%, 10=41.41%, 20=23.70%, 50=3.39% lat (msec) : 100=0.52% cpu : usr=0.10%, sys=9.21%, ctx=1240, majf=0, minf=33 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=359,409,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15918: Wed Apr 17 23:44:41 2024 read: IOPS=65, BW=1041KiB/s (1066kB/s)(6256KiB/6008msec) clat (usec): min=1770, max=47270, avg=6297.25, stdev=5660.02 lat (usec): min=1770, max=47271, avg=6297.80, stdev=5660.03 clat percentiles (usec): | 1.00th=[ 1876], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2474], | 30.00th=[ 2737], 40.00th=[ 3359], 50.00th=[ 4146], 60.00th=[ 5342], | 70.00th=[ 7373], 80.00th=[ 9372], 90.00th=[12649], 95.00th=[16712], | 99.00th=[32113], 99.50th=[46400], 99.90th=[47449], 99.95th=[47449], | 99.99th=[47449] bw ( KiB/s): min= 702, max= 1373, per=27.12%, avg=1041.75, stdev=203.58, samples=12 iops : min= 43, max= 85, avg=64.83, stdev=12.68, samples=12 write: IOPS=62, BW=1004KiB/s (1028kB/s)(6032KiB/6008msec) clat (usec): min=2732, max=76409, avg=9386.41, stdev=7073.77 lat (usec): min=2733, max=76410, avg=9387.62, stdev=7073.76 clat percentiles (usec): | 1.00th=[ 2868], 5.00th=[ 3228], 10.00th=[ 3556], 20.00th=[ 4293], | 30.00th=[ 5276], 40.00th=[ 6456], 50.00th=[ 7832], 60.00th=[ 9110], | 
70.00th=[10290], 80.00th=[12911], 90.00th=[17171], 95.00th=[21890], | 99.00th=[35390], 99.50th=[50070], 99.90th=[76022], 99.95th=[76022], | 99.99th=[76022] bw ( KiB/s): min= 576, max= 1277, per=24.72%, avg=999.08, stdev=221.07, samples=12 iops : min= 36, max= 79, avg=62.17, stdev=13.66, samples=12 lat (msec) : 2=1.30%, 4=32.16%, 10=42.06%, 20=20.18%, 50=4.04% lat (msec) : 100=0.26% cpu : usr=0.05%, sys=9.44%, ctx=1192, majf=0, minf=34 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=391,377,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15919: Wed Apr 17 23:44:41 2024 read: IOPS=62, BW=1001KiB/s (1025kB/s)(6032KiB/6026msec) clat (usec): min=1846, max=54846, avg=6501.25, stdev=5703.27 lat (usec): min=1846, max=54847, avg=6501.82, stdev=5703.28 clat percentiles (usec): | 1.00th=[ 1876], 5.00th=[ 2057], 10.00th=[ 2212], 20.00th=[ 2507], | 30.00th=[ 2900], 40.00th=[ 3458], 50.00th=[ 4359], 60.00th=[ 6128], | 70.00th=[ 8094], 80.00th=[ 9765], 90.00th=[12649], 95.00th=[15270], | 99.00th=[33162], 99.50th=[36963], 99.90th=[54789], 99.95th=[54789], | 99.99th=[54789] bw ( KiB/s): min= 704, max= 1216, per=25.76%, avg=989.33, stdev=144.05, samples=12 iops : min= 44, max= 76, avg=61.83, stdev= 9.00, samples=12 write: IOPS=64, BW=1038KiB/s (1063kB/s)(6256KiB/6026msec) clat (usec): min=2536, max=48135, avg=9128.41, stdev=5921.04 lat (usec): min=2537, max=48136, avg=9129.51, stdev=5921.08 clat percentiles (usec): | 1.00th=[ 2638], 5.00th=[ 3195], 10.00th=[ 3556], 20.00th=[ 4752], | 30.00th=[ 5735], 40.00th=[ 6652], 50.00th=[ 7701], 60.00th=[ 8848], | 70.00th=[10159], 80.00th=[12125], 90.00th=[16057], 95.00th=[21103], | 99.00th=[33162], 99.50th=[36439], 99.90th=[47973], 99.95th=[47973], | 99.99th=[47973] bw ( KiB/s): min= 800, max= 1344, per=25.67%, avg=1037.33, stdev=171.76, samples=12 iops : min= 50, max= 84, avg=64.83, stdev=10.74, samples=12 lat (msec) : 2=1.30%, 4=28.65%, 10=44.92%, 20=20.96%, 50=4.04% lat (msec) : 100=0.13% cpu : usr=0.03%, sys=9.51%, ctx=1224, majf=0, minf=33 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=377,391,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 Run status group 0 (all jobs): READ: bw=3842KiB/s (3934kB/s), 921KiB/s-1041KiB/s (943kB/s-1066kB/s), io=23.4MiB (24.5MB), run=6008-6235msec WRITE: bw=4042KiB/s (4139kB/s), 1004KiB/s-1050KiB/s (1028kB/s-1075kB/s), io=24.6MiB (25.8MB), run=6008-6235msec mix direct rw 1048576 by fio with 4 jobs... mix buffer rw 1048576 by fio with 4 jobs... rand-rw: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=16 ... fio-3.7 Starting 4 processes rand-rw: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=16 ... 
fio-3.7 Starting 4 processes rand-rw: (groupid=0, jobs=1): err= 0: pid=15930: Wed Apr 17 23:44:44 2024 read: IOPS=5, BW=5255KiB/s (5381kB/s)(7168KiB/1364msec) clat (usec): min=320, max=837857, avg=171365.92, stdev=307929.77 lat (usec): min=320, max=837858, avg=171366.82, stdev=307929.63 clat percentiles (usec): | 1.00th=[ 322], 5.00th=[ 322], 10.00th=[ 322], 20.00th=[ 326], | 30.00th=[ 330], 40.00th=[ 330], 50.00th=[ 433], 60.00th=[119014], | 70.00th=[119014], 80.00th=[242222], 90.00th=[834667], 95.00th=[834667], | 99.00th=[834667], 99.50th=[834667], 99.90th=[834667], 99.95th=[834667], | 99.99th=[834667] bw ( KiB/s): min= 1768, max=10219, per=39.92%, avg=5993.50, stdev=5975.76, samples=2 iops : min= 1, max= 9, avg= 5.00, stdev= 5.66, samples=2 write: IOPS=3, BW=3754KiB/s (3844kB/s)(5120KiB/1364msec) clat (usec): min=18665, max=60634, avg=32086.35, stdev=18111.78 lat (usec): min=18683, max=60707, avg=32114.57, stdev=18134.51 clat percentiles (usec): | 1.00th=[18744], 5.00th=[18744], 10.00th=[18744], 20.00th=[18744], | 30.00th=[18744], 40.00th=[18744], 50.00th=[22938], 60.00th=[22938], | 70.00th=[39584], 80.00th=[39584], 90.00th=[60556], 95.00th=[60556], | 99.00th=[60556], 99.50th=[60556], 99.90th=[60556], 99.95th=[60556], | 99.99th=[60556] bw ( KiB/s): min= 3537, max= 6131, per=23.00%, avg=4834.00, stdev=1834.23, samples=2 iops : min= 3, max= 5, avg= 4.00, stdev= 1.41, samples=2 lat (usec) : 500=33.33% lat (msec) : 20=16.67%, 50=16.67%, 100=8.33%, 250=16.67%, 1000=8.33% cpu : usr=0.00%, sys=27.95%, ctx=404, majf=0, minf=32 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=7,5,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15931: Wed Apr 17 23:44:44 2024 read: IOPS=4, BW=4249KiB/s (4351kB/s)(2048KiB/482msec) clat (usec): min=463, max=131542, avg=66002.84, stdev=92686.38 lat (usec): min=464, max=131544, avg=66004.58, stdev=92687.85 clat percentiles (usec): | 1.00th=[ 465], 5.00th=[ 465], 10.00th=[ 465], 20.00th=[ 465], | 30.00th=[ 465], 40.00th=[ 465], 50.00th=[ 465], 60.00th=[131597], | 70.00th=[131597], 80.00th=[131597], 90.00th=[131597], 95.00th=[131597], | 99.00th=[131597], 99.50th=[131597], 99.90th=[131597], 99.95th=[131597], | 99.99th=[131597] write: IOPS=20, BW=20.7MiB/s (21.8MB/s)(10.0MiB/482msec) clat (usec): min=18395, max=59448, avg=34703.15, stdev=13326.98 lat (usec): min=18435, max=59513, avg=34756.66, stdev=13324.13 clat percentiles (usec): | 1.00th=[18482], 5.00th=[18482], 10.00th=[18482], 20.00th=[23462], | 30.00th=[27132], 40.00th=[27919], 50.00th=[30016], 60.00th=[30540], | 70.00th=[46924], 80.00th=[46924], 90.00th=[51643], 95.00th=[59507], | 99.00th=[59507], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], | 99.99th=[59507] lat (usec) : 500=8.33% lat (msec) : 20=8.33%, 50=58.33%, 100=16.67%, 250=8.33% cpu : usr=0.00%, sys=56.76%, ctx=112, majf=0, minf=33 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=2,10,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15932: Wed Apr 17 23:44:44 2024 read: IOPS=16, BW=16.6MiB/s 
(17.4MB/s)(6144KiB/362msec) clat (usec): min=327, max=55803, avg=17109.86, stdev=24602.47 lat (usec): min=327, max=55804, avg=17110.54, stdev=24602.48 clat percentiles (usec): | 1.00th=[ 326], 5.00th=[ 326], 10.00th=[ 326], 20.00th=[ 429], | 30.00th=[ 429], 40.00th=[ 1614], 50.00th=[ 1614], 60.00th=[ 3851], | 70.00th=[40633], 80.00th=[40633], 90.00th=[55837], 95.00th=[55837], | 99.00th=[55837], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], | 99.99th=[55837] write: IOPS=16, BW=16.6MiB/s (17.4MB/s)(6144KiB/362msec) clat (usec): min=24423, max=81554, avg=42470.28, stdev=20647.22 lat (usec): min=24441, max=81590, avg=42512.19, stdev=20649.44 clat percentiles (usec): | 1.00th=[24511], 5.00th=[24511], 10.00th=[24511], 20.00th=[31589], | 30.00th=[31589], 40.00th=[32113], 50.00th=[32113], 60.00th=[37487], | 70.00th=[47973], 80.00th=[47973], 90.00th=[81265], 95.00th=[81265], | 99.00th=[81265], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], | 99.99th=[81265] lat (usec) : 500=16.67% lat (msec) : 2=8.33%, 4=8.33%, 50=50.00%, 100=16.67% cpu : usr=0.00%, sys=46.54%, ctx=68, majf=0, minf=33 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=6,6,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15933: Wed Apr 17 23:44:44 2024 read: IOPS=11, BW=11.3MiB/s (11.9MB/s)(5120KiB/442msec) clat (usec): min=335, max=96677, avg=26679.03, stdev=40615.40 lat (usec): min=335, max=96678, avg=26679.64, stdev=40615.40 clat percentiles (usec): | 1.00th=[ 334], 5.00th=[ 334], 10.00th=[ 334], 20.00th=[ 334], | 30.00th=[ 392], 40.00th=[ 392], 50.00th=[ 8979], 60.00th=[ 8979], | 70.00th=[27132], 80.00th=[27132], 90.00th=[96994], 95.00th=[96994], | 99.00th=[96994], 99.50th=[96994], 99.90th=[96994], 99.95th=[96994], | 99.99th=[96994] write: IOPS=15, BW=15.8MiB/s (16.6MB/s)(7168KiB/442msec) clat (usec): min=23227, max=72559, avg=43520.24, stdev=22118.30 lat (usec): min=23286, max=72581, avg=43558.10, stdev=22104.20 clat percentiles (usec): | 1.00th=[23200], 5.00th=[23200], 10.00th=[23200], 20.00th=[23725], | 30.00th=[26346], 40.00th=[26346], 50.00th=[31065], 60.00th=[63701], | 70.00th=[63701], 80.00th=[64226], 90.00th=[72877], 95.00th=[72877], | 99.00th=[72877], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], | 99.99th=[72877] lat (usec) : 500=16.67% lat (msec) : 10=8.33%, 50=41.67%, 100=33.33% cpu : usr=0.00%, sys=42.18%, ctx=156, majf=0, minf=33 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=5,7,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 Run status group 0 (all jobs): READ: bw=14.7MiB/s (15.4MB/s), 4249KiB/s-16.6MiB/s (4351kB/s-17.4MB/s), io=20.0MiB (20.0MB), run=362-1364msec WRITE: bw=20.5MiB/s (21.5MB/s), 3754KiB/s-20.7MiB/s (3844kB/s-21.8MB/s), io=28.0MiB (29.4MB), run=362-1364msec rand-rw: (groupid=0, jobs=1): err= 0: pid=15934: Wed Apr 17 23:44:44 2024 read: IOPS=4, BW=5091KiB/s (5213kB/s)(7168KiB/1408msec) clat (msec): min=17, max=640, avg=170.88, stdev=226.07 lat (msec): min=17, max=640, avg=170.89, stdev=226.07 clat percentiles (msec): | 1.00th=[ 18], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 41], | 
30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 84], 60.00th=[ 85], | 70.00th=[ 85], 80.00th=[ 288], 90.00th=[ 642], 95.00th=[ 642], | 99.00th=[ 642], 99.50th=[ 642], 99.90th=[ 642], 99.95th=[ 642], | 99.99th=[ 642] bw ( KiB/s): min= 5657, max= 6131, per=43.54%, avg=5894.00, stdev=335.17, samples=2 iops : min= 5, max= 5, avg= 5.00, stdev= 0.00, samples=2 write: IOPS=3, BW=3636KiB/s (3724kB/s)(5120KiB/1408msec) clat (usec): min=24761, max=55673, avg=36215.55, stdev=13792.84 lat (usec): min=24836, max=55740, avg=36272.27, stdev=13795.46 clat percentiles (usec): | 1.00th=[24773], 5.00th=[24773], 10.00th=[24773], 20.00th=[24773], | 30.00th=[25035], 40.00th=[25035], 50.00th=[30016], 60.00th=[30016], | 70.00th=[45351], 80.00th=[45351], 90.00th=[55837], 95.00th=[55837], | 99.00th=[55837], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], | 99.99th=[55837] bw ( KiB/s): min= 2043, max= 7543, per=25.29%, avg=4793.00, stdev=3889.09, samples=2 iops : min= 1, max= 7, avg= 4.00, stdev= 4.24, samples=2 lat (msec) : 20=8.33%, 50=50.00%, 100=25.00%, 500=8.33%, 750=8.33% cpu : usr=0.00%, sys=8.96%, ctx=60, majf=0, minf=34 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=7,5,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15935: Wed Apr 17 23:44:44 2024 read: IOPS=1, BW=1513KiB/s (1549kB/s)(2048KiB/1354msec) clat (usec): min=18182, max=91533, avg=54857.80, stdev=51866.70 lat (usec): min=18183, max=91534, avg=54858.92, stdev=51867.43 clat percentiles (usec): | 1.00th=[18220], 5.00th=[18220], 10.00th=[18220], 20.00th=[18220], | 30.00th=[18220], 40.00th=[18220], 50.00th=[18220], 60.00th=[91751], | 70.00th=[91751], 80.00th=[91751], 90.00th=[91751], 95.00th=[91751], | 99.00th=[91751], 99.50th=[91751], 99.90th=[91751], 99.95th=[91751], | 99.99th=[91751] bw ( KiB/s): min= 4087, max= 4087, per=30.19%, avg=4087.00, stdev= 0.00, samples=1 iops : min= 3, max= 3, avg= 3.00, stdev= 0.00, samples=1 write: IOPS=7, BW=7563KiB/s (7744kB/s)(10.0MiB/1354msec) clat (msec): min=19, max=740, avg=124.27, stdev=218.82 lat (msec): min=19, max=740, avg=124.31, stdev=218.85 clat percentiles (msec): | 1.00th=[ 20], 5.00th=[ 20], 10.00th=[ 20], 20.00th=[ 22], | 30.00th=[ 25], 40.00th=[ 41], 50.00th=[ 50], 60.00th=[ 61], | 70.00th=[ 94], 80.00th=[ 94], 90.00th=[ 105], 95.00th=[ 743], | 99.00th=[ 743], 99.50th=[ 743], 99.90th=[ 743], 99.95th=[ 743], | 99.99th=[ 743] bw ( KiB/s): min= 7460, max=10219, per=46.65%, avg=8839.50, stdev=1950.91, samples=2 iops : min= 7, max= 9, avg= 8.00, stdev= 1.41, samples=2 lat (msec) : 20=16.67%, 50=33.33%, 100=33.33%, 250=8.33%, 750=8.33% cpu : usr=0.00%, sys=12.12%, ctx=80, majf=0, minf=32 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=2,10,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15936: Wed Apr 17 23:44:44 2024 read: IOPS=7, BW=7670KiB/s (7855kB/s)(6144KiB/801msec) clat (usec): min=18235, max=66107, avg=34590.72, stdev=18721.23 lat (usec): min=18235, max=66108, avg=34591.72, stdev=18721.41 clat percentiles (usec): | 1.00th=[18220], 5.00th=[18220], 
10.00th=[18220], 20.00th=[21627], | 30.00th=[21627], 40.00th=[23725], 50.00th=[23725], 60.00th=[29754], | 70.00th=[47973], 80.00th=[47973], 90.00th=[66323], 95.00th=[66323], | 99.00th=[66323], 99.50th=[66323], 99.90th=[66323], 99.95th=[66323], | 99.99th=[66323] bw ( KiB/s): min= 2048, max= 2048, per=15.13%, avg=2048.00, stdev= 0.00, samples=1 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=1 write: IOPS=7, BW=7670KiB/s (7855kB/s)(6144KiB/801msec) clat (msec): min=23, max=271, avg=90.41, stdev=90.40 lat (msec): min=23, max=271, avg=90.45, stdev=90.40 clat percentiles (msec): | 1.00th=[ 24], 5.00th=[ 24], 10.00th=[ 24], 20.00th=[ 52], | 30.00th=[ 52], 40.00th=[ 55], 50.00th=[ 55], 60.00th=[ 68], | 70.00th=[ 74], 80.00th=[ 74], 90.00th=[ 271], 95.00th=[ 271], | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], | 99.99th=[ 271] bw ( KiB/s): min= 6144, max= 6144, per=32.42%, avg=6144.00, stdev= 0.00, samples=1 iops : min= 6, max= 6, avg= 6.00, stdev= 0.00, samples=1 lat (msec) : 20=8.33%, 50=41.67%, 100=41.67%, 500=8.33% cpu : usr=0.00%, sys=16.12%, ctx=45, majf=0, minf=32 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=6,6,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15937: Wed Apr 17 23:44:44 2024 read: IOPS=3, BW=3384KiB/s (3465kB/s)(5120KiB/1513msec) clat (msec): min=24, max=766, avg=178.29, stdev=328.65 lat (msec): min=24, max=766, avg=178.29, stdev=328.65 clat percentiles (msec): | 1.00th=[ 25], 5.00th=[ 25], 10.00th=[ 25], 20.00th=[ 25], | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 33], 60.00th=[ 33], | 70.00th=[ 42], 80.00th=[ 42], 90.00th=[ 768], 95.00th=[ 768], | 99.00th=[ 768], 99.50th=[ 768], 99.90th=[ 768], 99.95th=[ 768], | 99.99th=[ 768] bw ( KiB/s): min= 2048, max= 6144, per=30.26%, avg=4096.00, stdev=2896.31, samples=2 iops : min= 2, max= 6, avg= 4.00, stdev= 2.83, samples=2 write: IOPS=4, BW=4738KiB/s (4851kB/s)(7168KiB/1513msec) clat (msec): min=28, max=286, avg=85.94, stdev=93.76 lat (msec): min=28, max=287, avg=85.99, stdev=93.76 clat percentiles (msec): | 1.00th=[ 29], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 37], 60.00th=[ 93], | 70.00th=[ 93], 80.00th=[ 99], 90.00th=[ 288], 95.00th=[ 288], | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 288], 99.95th=[ 288], | 99.99th=[ 288] bw ( KiB/s): min= 2048, max= 6144, per=25.22%, avg=4778.67, stdev=2364.83, samples=3 iops : min= 2, max= 6, avg= 4.67, stdev= 2.31, samples=3 lat (msec) : 50=66.67%, 100=16.67%, 500=8.33%, 1000=8.33% cpu : usr=0.00%, sys=8.20%, ctx=52, majf=0, minf=30 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=5,7,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 Run status group 0 (all jobs): READ: bw=13.2MiB/s (13.9MB/s), 1513KiB/s-7670KiB/s (1549kB/s-7855kB/s), io=20.0MiB (20.0MB), run=801-1513msec WRITE: bw=18.5MiB/s (19.4MB/s), 3636KiB/s-7670KiB/s (3724kB/s-7855kB/s), io=28.0MiB (29.4MB), run=801-1513msec mix direct rw 4194304 by fio with 4 jobs... mix buffer rw 4194304 by fio with 4 jobs... 
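(The "mix direct rw <bs> by fio with 4 jobs" / "mix buffer rw" echoes above come from sanity.sh test 398b, which repeats the same random read/write workload once with O_DIRECT and once through the page cache at each block size. The script's exact fio command is not shown in this log; a minimal equivalent sketch — the file name and the 12M per-job size are assumptions, the latter inferred from the ~48MiB io= totals per run group above — would be:

    for DIRECT in 1 0; do   # 1 = "mix direct rw", 0 = "mix buffer rw"
        fio --name=rand-rw --rw=randrw --bs=4M --size=12M \
            --numjobs=4 --iodepth=16 --ioengine=psync \
            --direct=$DIRECT --filename=/mnt/lustre/f398b.sanity
    done

Since psync is a synchronous engine, each job keeps only one I/O outstanding regardless of iodepth=16, which is why every report above shows "IO depths : 1=100.0%".)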
rand-rw: (g=0): rw=randrw, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=psync, iodepth=16 ... fio-3.7 Starting 4 processes rand-rw: (g=0): rw=randrw, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=psync, iodepth=16 ... fio-3.7 Starting 4 processes rand-rw: (groupid=0, jobs=1): err= 0: pid=15943: Wed Apr 17 23:44:50 2024 read: IOPS=0, BW=2439KiB/s (2497kB/s)(12.0MiB/5039msec) clat (msec): min=115, max=3323, avg=1677.11, stdev=1605.84 lat (msec): min=115, max=3323, avg=1677.11, stdev=1605.84 clat percentiles (msec): | 1.00th=[ 116], 5.00th=[ 116], 10.00th=[ 116], 20.00th=[ 116], | 30.00th=[ 116], 40.00th=[ 1586], 50.00th=[ 1586], 60.00th=[ 1586], | 70.00th=[ 3339], 80.00th=[ 3339], 90.00th=[ 3339], 95.00th=[ 3339], | 99.00th=[ 3339], 99.50th=[ 3339], 99.90th=[ 3339], 99.95th=[ 3339], | 99.99th=[ 3339] bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=2 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=2 lat (msec) : 250=33.33% cpu : usr=0.00%, sys=35.11%, ctx=1310, majf=0, minf=31 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=3,0,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15944: Wed Apr 17 23:44:50 2024 read: IOPS=0, BW=721KiB/s (738kB/s)(4096KiB/5681msec) clat (nsec): min=141682k, max=141682k, avg=141682388.00, stdev= 0.00 lat (nsec): min=141684k, max=141684k, avg=141683804.00, stdev= 0.00 clat percentiles (msec): | 1.00th=[ 142], 5.00th=[ 142], 10.00th=[ 142], 20.00th=[ 142], | 30.00th=[ 142], 40.00th=[ 142], 50.00th=[ 142], 60.00th=[ 142], | 70.00th=[ 142], 80.00th=[ 142], 90.00th=[ 142], 95.00th=[ 142], | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 142], | 99.99th=[ 142] bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=1 write: IOPS=0, BW=1442KiB/s (1477kB/s)(8192KiB/5681msec) clat (msec): min=730, max=4808, avg=2769.14, stdev=2883.52 lat (msec): min=730, max=4808, avg=2769.36, stdev=2883.69 clat percentiles (msec): | 1.00th=[ 735], 5.00th=[ 735], 10.00th=[ 735], 20.00th=[ 735], | 30.00th=[ 735], 40.00th=[ 735], 50.00th=[ 735], 60.00th=[ 4799], | 70.00th=[ 4799], 80.00th=[ 4799], 90.00th=[ 4799], 95.00th=[ 4799], | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], | 99.99th=[ 4799] bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=1 lat (msec) : 250=33.33%, 750=33.33% cpu : usr=0.00%, sys=5.49%, ctx=647, majf=0, minf=31 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1,2,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15945: Wed Apr 17 23:44:50 2024 write: IOPS=0, BW=2324KiB/s (2380kB/s)(12.0MiB/5288msec) clat (msec): min=319, max=3141, avg=1760.92, stdev=1412.03 lat (msec): min=319, max=3141, avg=1761.13, stdev=1412.01 clat percentiles (msec): | 1.00th=[ 321], 5.00th=[ 321], 10.00th=[ 321], 20.00th=[ 321], | 30.00th=[ 321], 40.00th=[ 
1821], 50.00th=[ 1821], 60.00th=[ 1821], | 70.00th=[ 3138], 80.00th=[ 3138], 90.00th=[ 3138], 95.00th=[ 3138], | 99.00th=[ 3138], 99.50th=[ 3138], 99.90th=[ 3138], 99.95th=[ 3138], | 99.99th=[ 3138] bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=2 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=2 lat (msec) : 500=33.33% cpu : usr=0.02%, sys=8.47%, ctx=480, majf=0, minf=31 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=0,3,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15946: Wed Apr 17 23:44:50 2024 write: IOPS=0, BW=2291KiB/s (2346kB/s)(12.0MiB/5364msec) clat (msec): min=88, max=4673, avg=1782.04, stdev=2515.99 lat (msec): min=88, max=4673, avg=1782.30, stdev=2516.00 clat percentiles (msec): | 1.00th=[ 89], 5.00th=[ 89], 10.00th=[ 89], 20.00th=[ 89], | 30.00th=[ 89], 40.00th=[ 584], 50.00th=[ 584], 60.00th=[ 584], | 70.00th=[ 4665], 80.00th=[ 4665], 90.00th=[ 4665], 95.00th=[ 4665], | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665], | 99.99th=[ 4665] bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=1 lat (msec) : 100=33.33%, 750=33.33% cpu : usr=0.00%, sys=5.31%, ctx=49, majf=0, minf=30 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=0,3,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 Run status group 0 (all jobs): READ: bw=2884KiB/s (2953kB/s), 721KiB/s-2439KiB/s (738kB/s-2497kB/s), io=16.0MiB (16.8MB), run=5039-5681msec WRITE: bw=5768KiB/s (5906kB/s), 1442KiB/s-2324KiB/s (1477kB/s-2380kB/s), io=32.0MiB (33.6MB), run=5288-5681msec rand-rw: (groupid=0, jobs=1): err= 0: pid=15948: Wed Apr 17 23:44:51 2024 read: IOPS=0, BW=2227KiB/s (2281kB/s)(12.0MiB/5517msec) clat (msec): min=52, max=4878, avg=1838.67, stdev=2645.90 lat (msec): min=52, max=4878, avg=1838.67, stdev=2645.90 clat percentiles (msec): | 1.00th=[ 53], 5.00th=[ 53], 10.00th=[ 53], 20.00th=[ 53], | 30.00th=[ 53], 40.00th=[ 584], 50.00th=[ 584], 60.00th=[ 584], | 70.00th=[ 4866], 80.00th=[ 4866], 90.00th=[ 4866], 95.00th=[ 4866], | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], | 99.99th=[ 4866] bw ( KiB/s): min= 8175, max= 8175, per=100.00%, avg=8175.00, stdev= 0.00, samples=1 iops : min= 1, max= 1, avg= 1.00, stdev= 0.00, samples=1 lat (msec) : 100=33.33%, 750=33.33% cpu : usr=0.00%, sys=1.94%, ctx=46, majf=0, minf=32 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=3,0,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15949: Wed Apr 17 23:44:51 2024 read: IOPS=0, BW=716KiB/s (733kB/s)(4096KiB/5722msec) clat (nsec): min=539794k, max=539794k, avg=539793658.00, stdev= 0.00 lat (nsec): min=539795k, max=539795k, avg=539795252.00, stdev= 0.00 clat percentiles (msec): | 1.00th=[ 542], 
5.00th=[ 542], 10.00th=[ 542], 20.00th=[ 542], | 30.00th=[ 542], 40.00th=[ 542], 50.00th=[ 542], 60.00th=[ 542], | 70.00th=[ 542], 80.00th=[ 542], 90.00th=[ 542], 95.00th=[ 542], | 99.00th=[ 542], 99.50th=[ 542], 99.90th=[ 542], 99.95th=[ 542], | 99.99th=[ 542] bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=1 write: IOPS=0, BW=1432KiB/s (1466kB/s)(8192KiB/5722msec) clat (msec): min=396, max=4774, avg=2585.38, stdev=3096.14 lat (msec): min=396, max=4774, avg=2585.61, stdev=3095.97 clat percentiles (msec): | 1.00th=[ 397], 5.00th=[ 397], 10.00th=[ 397], 20.00th=[ 397], | 30.00th=[ 397], 40.00th=[ 397], 50.00th=[ 397], 60.00th=[ 4799], | 70.00th=[ 4799], 80.00th=[ 4799], 90.00th=[ 4799], 95.00th=[ 4799], | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], | 99.99th=[ 4799] bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=1 lat (msec) : 500=33.33%, 750=33.33% cpu : usr=0.02%, sys=2.57%, ctx=175, majf=0, minf=33 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1,2,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15950: Wed Apr 17 23:44:51 2024 write: IOPS=0, BW=2233KiB/s (2287kB/s)(12.0MiB/5502msec) clat (msec): min=502, max=3824, avg=1831.86, stdev=1757.51 lat (msec): min=502, max=3824, avg=1832.08, stdev=1757.59 clat percentiles (msec): | 1.00th=[ 502], 5.00th=[ 502], 10.00th=[ 502], 20.00th=[ 502], | 30.00th=[ 502], 40.00th=[ 1167], 50.00th=[ 1167], 60.00th=[ 1167], | 70.00th=[ 3809], 80.00th=[ 3809], 90.00th=[ 3809], 95.00th=[ 3809], | 99.00th=[ 3809], 99.50th=[ 3809], 99.90th=[ 3809], 99.95th=[ 3809], | 99.99th=[ 3809] bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=2 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=2 lat (msec) : 750=33.33% cpu : usr=0.02%, sys=4.07%, ctx=106, majf=0, minf=29 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=0,3,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=15951: Wed Apr 17 23:44:51 2024 write: IOPS=0, BW=2196KiB/s (2249kB/s)(12.0MiB/5595msec) clat (msec): min=67, max=4849, avg=1860.51, stdev=2605.51 lat (msec): min=67, max=4849, avg=1860.77, stdev=2605.53 clat percentiles (msec): | 1.00th=[ 68], 5.00th=[ 68], 10.00th=[ 68], 20.00th=[ 68], | 30.00th=[ 68], 40.00th=[ 667], 50.00th=[ 667], 60.00th=[ 667], | 70.00th=[ 4866], 80.00th=[ 4866], 90.00th=[ 4866], 95.00th=[ 4866], | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], | 99.99th=[ 4866] bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 iops : min= 2, max= 2, avg= 2.00, stdev= 0.00, samples=1 lat (msec) : 100=33.33%, 750=33.33% cpu : usr=0.04%, sys=2.41%, ctx=43, majf=0, minf=30 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=0,3,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 Run status group 0 (all jobs): READ: bw=2863KiB/s (2932kB/s), 716KiB/s-2227KiB/s (733kB/s-2281kB/s), io=16.0MiB (16.8MB), run=5517-5722msec WRITE: bw=5727KiB/s (5864kB/s), 1432KiB/s-2233KiB/s (1466kB/s-2287kB/s), io=32.0MiB (33.6MB), run=5502-5722msec PASS 398b (37s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398c: run fio to test AIO ================= 23:44:54 (1713411894) /usr/bin/fio debug=0 40+0 records in 40+0 records out 41943040 bytes (42 MB) copied, 0.0738094 s, 568 MB/s osc.lustre-OST0000-osc-ffff88012a451000.rpc_stats=clear writing 40M to OST0 by fio with 4 jobs... rand-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=16 ... fio-3.7 Starting 4 processes rand-write: (groupid=0, jobs=1): err= 0: pid=16729: Wed Apr 17 23:44:59 2024 write: IOPS=772, BW=3092KiB/s (3166kB/s)(10.0MiB/3312msec) slat (usec): min=10, max=296, avg=43.79, stdev=25.03 clat (usec): min=2408, max=36661, avg=20556.52, stdev=2345.61 lat (usec): min=2460, max=36704, avg=20601.38, stdev=2347.83 clat percentiles (usec): | 1.00th=[11731], 5.00th=[16712], 10.00th=[18220], 20.00th=[19268], | 30.00th=[20055], 40.00th=[20579], 50.00th=[20841], 60.00th=[21365], | 70.00th=[21627], 80.00th=[22152], 90.00th=[22676], 95.00th=[23200], | 99.00th=[25035], 99.50th=[25822], 99.90th=[34341], 99.95th=[34341], | 99.99th=[36439] bw ( KiB/s): min= 2944, max= 3344, per=25.01%, avg=3093.33, stdev=141.91, samples=6 iops : min= 736, max= 836, avg=773.33, stdev=35.48, samples=6 lat (msec) : 4=0.04%, 10=0.12%, 20=30.31%, 50=69.53% cpu : usr=0.69%, sys=3.93%, ctx=1929, majf=0, minf=31 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=0,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-write: (groupid=0, jobs=1): err= 0: pid=16730: Wed Apr 17 23:44:59 2024 write: IOPS=772, BW=3092KiB/s (3166kB/s)(10.0MiB/3312msec) slat (usec): min=10, max=259, avg=38.37, stdev=22.90 clat (usec): min=11398, max=25848, avg=20564.40, stdev=2221.45 lat (usec): min=11415, max=25885, avg=20603.78, stdev=2223.07 clat percentiles (usec): | 1.00th=[12125], 5.00th=[16712], 10.00th=[18220], 20.00th=[19268], | 30.00th=[20055], 40.00th=[20579], 50.00th=[20841], 60.00th=[21365], | 70.00th=[21627], 80.00th=[22152], 90.00th=[22676], 95.00th=[23200], | 99.00th=[24511], 99.50th=[25035], 99.90th=[25560], 99.95th=[25560], | 99.99th=[25822] bw ( KiB/s): min= 2944, max= 3328, per=24.98%, avg=3089.33, stdev=140.80, samples=6 iops : min= 736, max= 832, avg=772.33, stdev=35.20, samples=6 lat (msec) : 20=30.16%, 50=69.84% cpu : usr=0.51%, sys=3.50%, ctx=1369, majf=0, minf=30 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=0,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-write: (groupid=0, jobs=1): err= 0: pid=16731: Wed Apr 17 23:44:59 2024 write: IOPS=773, BW=3094KiB/s (3168kB/s)(10.0MiB/3310msec) slat (usec): min=11, max=273, avg=40.42, stdev=22.39 clat 
(usec): min=2313, max=36453, avg=20548.23, stdev=2414.93 lat (usec): min=2358, max=36480, avg=20589.68, stdev=2416.22 clat percentiles (usec): | 1.00th=[11994], 5.00th=[16450], 10.00th=[18220], 20.00th=[19268], | 30.00th=[19792], 40.00th=[20579], 50.00th=[20841], 60.00th=[21365], | 70.00th=[21627], 80.00th=[22152], 90.00th=[22676], 95.00th=[23200], | 99.00th=[24511], 99.50th=[25560], 99.90th=[36439], 99.95th=[36439], | 99.99th=[36439] bw ( KiB/s): min= 2944, max= 3360, per=25.02%, avg=3094.67, stdev=148.76, samples=6 iops : min= 736, max= 840, avg=773.67, stdev=37.19, samples=6 lat (msec) : 4=0.08%, 10=0.12%, 20=30.74%, 50=69.06% cpu : usr=0.66%, sys=3.75%, ctx=1647, majf=0, minf=31 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=0,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-write: (groupid=0, jobs=1): err= 0: pid=16732: Wed Apr 17 23:44:59 2024 write: IOPS=774, BW=3099KiB/s (3174kB/s)(10.0MiB/3304msec) slat (usec): min=11, max=202, avg=41.01, stdev=23.32 clat (usec): min=3154, max=41143, avg=20518.57, stdev=2758.23 lat (usec): min=3210, max=41159, avg=20560.66, stdev=2760.10 clat percentiles (usec): | 1.00th=[ 8094], 5.00th=[16057], 10.00th=[18220], 20.00th=[19268], | 30.00th=[20055], 40.00th=[20579], 50.00th=[20841], 60.00th=[21365], | 70.00th=[21627], 80.00th=[22152], 90.00th=[22676], 95.00th=[22938], | 99.00th=[25035], 99.50th=[25822], 99.90th=[41157], 99.95th=[41157], | 99.99th=[41157] bw ( KiB/s): min= 2968, max= 3440, per=25.13%, avg=3108.00, stdev=172.97, samples=6 iops : min= 742, max= 860, avg=777.00, stdev=43.24, samples=6 lat (msec) : 4=0.08%, 10=1.05%, 20=29.45%, 50=69.41% cpu : usr=0.64%, sys=3.63%, ctx=1518, majf=0, minf=28 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=0,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 Run status group 0 (all jobs): WRITE: bw=12.1MiB/s (12.7MB/s), 3092KiB/s-3099KiB/s (3166kB/s-3174kB/s), io=40.0MiB (41.9MB), run=3304-3312msec mix rw 40M to OST0 by fio with 4 jobs... rand-rw: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=16 ... 
fio-3.7 Starting 4 processes rand-rw: (groupid=0, jobs=1): err= 0: pid=16745: Wed Apr 17 23:45:02 2024 read: IOPS=485, BW=1943KiB/s (1990kB/s)(5048KiB/2598msec) slat (usec): min=20, max=2253, avg=1103.50, stdev=222.73 clat (usec): min=1976, max=25568, avg=15388.46, stdev=2031.23 lat (usec): min=2012, max=26629, avg=16493.57, stdev=2035.09 clat percentiles (usec): | 1.00th=[11207], 5.00th=[12256], 10.00th=[12780], 20.00th=[13566], | 30.00th=[14222], 40.00th=[14877], 50.00th=[15401], 60.00th=[16057], | 70.00th=[16450], 80.00th=[17171], 90.00th=[17957], 95.00th=[18482], | 99.00th=[19530], 99.50th=[20055], 99.90th=[24249], 99.95th=[25560], | 99.99th=[25560] bw ( KiB/s): min= 1856, max= 2032, per=25.23%, avg=1947.20, stdev=63.90, samples=5 iops : min= 464, max= 508, avg=486.80, stdev=15.97, samples=5 write: IOPS=499, BW=1998KiB/s (2046kB/s)(5192KiB/2598msec) slat (usec): min=11, max=933, avg=38.61, stdev=30.88 clat (usec): min=3097, max=24302, avg=15916.05, stdev=1638.67 lat (usec): min=3126, max=24336, avg=15955.67, stdev=1637.81 clat percentiles (usec): | 1.00th=[12125], 5.00th=[13435], 10.00th=[13960], 20.00th=[14615], | 30.00th=[15139], 40.00th=[15401], 50.00th=[15926], 60.00th=[16188], | 70.00th=[16712], 80.00th=[17171], 90.00th=[17957], 95.00th=[18744], | 99.00th=[19792], 99.50th=[20317], 99.90th=[23462], 99.95th=[24249], | 99.99th=[24249] bw ( KiB/s): min= 1848, max= 2128, per=24.85%, avg=1996.80, stdev=105.16, samples=5 iops : min= 462, max= 532, avg=499.20, stdev=26.29, samples=5 lat (msec) : 2=0.04%, 4=0.04%, 10=0.12%, 20=99.26%, 50=0.55% cpu : usr=1.23%, sys=6.85%, ctx=2167, majf=0, minf=32 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% issued rwts: total=1262,1298,0,0 short=0,0,0,0 dropped=0,0,0,0 latency : target=0, window=0, percentile=100.00%, depth=16 rand-rw: (groupid=0, jobs=1): err= 0: pid=16746: Wed Apr 17 23:45:02 2024 read: IOPS=479, BW=1918KiB/s (1964kB/s)(4976KiB/2595msec) slat (usec): min=14, max=1878, avg=1093.62, stdev=209.45 clat (usec): min=8924, max=26199, avg=15394.27, stdev=1955.68 lat (usec): min=8982, max=27125, avg=16489.50, stdev=1967.20 clat percentiles (usec): | 1.00th=[11207], 5.00th=[12387], 10.00th=[13042], 20.00th=[13829], | 30.00th=[14353], 40.00th=[14877], 50.00th=[15401], 60.00th=[15795], | 70.00th=[16319], 80.00th=[16909], 90.00th=[17957], 95.00th=[18744], | 99.00th=[19792], 99.50th=[20579], 99.90th=[23200], 99.95th=[26084], | 99.99th=[26084] bw ( KiB/s): min= 1816, max= 2000, per=24.75%, avg=1910.40, stdev=73.63, samples=5 iops : min= 454, max= 500, avg=477.60, stdev=18.41, samples=5 write: IOPS=507, BW=2029KiB/s (2077kB/s)(5264KiB/2595msec) slat (usec): min=11, max=223, avg=37.73, stdev=17.57 clat (usec): min=6983, max=26164, avg=15899.47, stdev=1557.67 lat (usec): min=7005, max=26193, avg=15938.17, stdev=1557.78 clat percentiles (usec): | 1.00th=[12256], 5.00th=[13435], 10.00th=[13960], 20.00th=[14746], | 30.00th=[15139], 40.00th=[15664], 50.00th=[15926], 60.00th=[16319], | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], | 99.00th=[19268], 99.50th=[19530], 99.90th=[21627], 99.95th=[26084], | 99.99th=[26084] bw ( KiB/s): min= 1904, max= 2168, per=25.29%, avg=2032.00, stdev=114.96, samples=5 iops : min= 476, max= 542, avg=508.00, stdev=28.74, samples=5 lat (msec) : 10=0.31%, 20=99.02%, 50=0.66% cpu : usr=0.39%, sys=7.52%, ctx=2118, majf=0, minf=31 IO depths : 
1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1244,1316,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16
rand-rw: (groupid=0, jobs=1): err= 0: pid=16747: Wed Apr 17 23:45:02 2024
  read: IOPS=480, BW=1921KiB/s (1967kB/s)(4984KiB/2595msec)
    slat (usec): min=13, max=1877, avg=1102.84, stdev=216.41
    clat (usec): min=2038, max=21840, avg=15439.92, stdev=2094.37
     lat (usec): min=2082, max=22738, avg=16544.52, stdev=2103.56
    clat percentiles (usec):
     |  1.00th=[10945],  5.00th=[12125], 10.00th=[12780], 20.00th=[13566],
     | 30.00th=[14222], 40.00th=[14877], 50.00th=[15401], 60.00th=[16188],
     | 70.00th=[16712], 80.00th=[17171], 90.00th=[17957], 95.00th=[18744],
     | 99.00th=[20055], 99.50th=[20579], 99.90th=[21103], 99.95th=[21890],
     | 99.99th=[21890]
   bw (  KiB/s): min= 1872, max= 1984, per=25.00%, avg=1929.60, stdev=47.80, samples=5
   iops        : min=  468, max=  496, avg=482.40, stdev=11.95, samples=5
  write: IOPS=506, BW=2025KiB/s (2074kB/s)(5256KiB/2595msec)
    slat (usec): min=11, max=796, avg=38.61, stdev=26.94
    clat (usec): min=4359, max=25546, avg=15838.17, stdev=1690.25
     lat (usec): min=4441, max=25604, avg=15877.79, stdev=1688.79
    clat percentiles (usec):
     |  1.00th=[12125],  5.00th=[13304], 10.00th=[13829], 20.00th=[14484],
     | 30.00th=[15008], 40.00th=[15401], 50.00th=[15795], 60.00th=[16188],
     | 70.00th=[16581], 80.00th=[17171], 90.00th=[17957], 95.00th=[18482],
     | 99.00th=[19792], 99.50th=[20579], 99.90th=[25560], 99.95th=[25560],
     | 99.99th=[25560]
   bw (  KiB/s): min= 1912, max= 2120, per=25.05%, avg=2012.80, stdev=75.39, samples=5
   iops        : min=  478, max=  530, avg=503.20, stdev=18.85, samples=5
  lat (msec)   : 4=0.04%, 10=0.23%, 20=98.75%, 50=0.98%
  cpu          : usr=0.93%, sys=7.17%, ctx=2115, majf=0, minf=30
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1246,1314,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16
rand-rw: (groupid=0, jobs=1): err= 0: pid=16748: Wed Apr 17 23:45:02 2024
  read: IOPS=486, BW=1946KiB/s (1993kB/s)(5060KiB/2600msec)
    slat (usec): min=15, max=2942, avg=1103.44, stdev=222.86
    clat (usec): min=3975, max=23245, avg=15496.23, stdev=2093.89
     lat (usec): min=4972, max=24440, avg=16601.35, stdev=2081.36
    clat percentiles (usec):
     |  1.00th=[10945],  5.00th=[12387], 10.00th=[12780], 20.00th=[13698],
     | 30.00th=[14353], 40.00th=[14877], 50.00th=[15401], 60.00th=[16057],
     | 70.00th=[16581], 80.00th=[17171], 90.00th=[17957], 95.00th=[18744],
     | 99.00th=[21103], 99.50th=[21627], 99.90th=[22938], 99.95th=[23200],
     | 99.99th=[23200]
   bw (  KiB/s): min= 1864, max= 2008, per=25.08%, avg=1936.00, stdev=68.12, samples=5
   iops        : min=  466, max=  502, avg=484.00, stdev=17.03, samples=5
  write: IOPS=498, BW=1992KiB/s (2040kB/s)(5180KiB/2600msec)
    slat (usec): min=11, max=451, avg=37.89, stdev=22.48
    clat (usec): min=2316, max=20909, avg=15836.25, stdev=1573.43
     lat (usec): min=2357, max=20952, avg=15875.04, stdev=1572.75
    clat percentiles (usec):
     |  1.00th=[12256],  5.00th=[13435], 10.00th=[13960], 20.00th=[14746],
     | 30.00th=[15139], 40.00th=[15533], 50.00th=[15795], 60.00th=[16188],
     | 70.00th=[16581], 80.00th=[16909], 90.00th=[17695], 95.00th=[18482],
     | 99.00th=[19530], 99.50th=[20055], 99.90th=[20579], 99.95th=[20841],
     | 99.99th=[20841]
   bw (  KiB/s): min= 1912, max= 2048, per=24.71%, avg=1985.60, stdev=60.24, samples=5
   iops        : min=  478, max=  512, avg=496.40, stdev=15.06, samples=5
  lat (msec)   : 4=0.08%, 10=0.27%, 20=98.40%, 50=1.25%
  cpu          : usr=0.77%, sys=7.04%, ctx=2180, majf=0, minf=31
  IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=99.4%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1265,1295,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=7718KiB/s (7904kB/s), 1918KiB/s-1946KiB/s (1964kB/s-1993kB/s), io=19.6MiB (20.5MB), run=2595-2600msec
  WRITE: bw=8035KiB/s (8228kB/s), 1992KiB/s-2029KiB/s (2040kB/s-2077kB/s), io=20.4MiB (21.4MB), run=2595-2600msec
AIO with large block size 40M
rand-rw: (g=0): rw=randrw, bs=(R) 40.0MiB-40.0MiB, (W) 40.0MiB-40.0MiB, (T) 40.0MiB-40.0MiB, ioengine=libaio, iodepth=16
fio-3.7
Starting 1 process
rand-rw: (groupid=0, jobs=1): err= 0: pid=16754: Wed Apr 17 23:45:03 2024
  read: IOPS=15, BW=625MiB/s (655MB/s)(40.0MiB/64msec)
    slat (nsec): min=6596.3k, max=6596.3k, avg=6596261.00, stdev= 0.00
    clat (nsec): min=56997k, max=56997k, avg=56997044.00, stdev= 0.00
     lat (nsec): min=63603k, max=63603k, avg=63602556.00, stdev= 0.00
    clat percentiles (usec):
     |  1.00th=[56886],  5.00th=[56886], 10.00th=[56886], 20.00th=[56886],
     | 30.00th=[56886], 40.00th=[56886], 50.00th=[56886], 60.00th=[56886],
     | 70.00th=[56886], 80.00th=[56886], 90.00th=[56886], 95.00th=[56886],
     | 99.00th=[56886], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886],
     | 99.99th=[56886]
  lat (msec)   : 100=100.00%
  cpu          : usr=0.00%, sys=11.11%, ctx=9, majf=0, minf=29
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=625MiB/s (655MB/s), 625MiB/s-625MiB/s (655MB/s-655MB/s), io=40.0MiB (41.9MB), run=64-64msec
debug=trace inode super iotrace malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec lfsck hsm snapshot layout
PASS 398c (11s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398d: run aiocp to verify block size > stripe size ========================================================== 23:45:07 (1713411907)
/home/green/git/lustre-release/lustre/tests/aiocp
64+0 records in
64+0 records out
67108864 bytes (67 MB) copied, 1.95967 s, 34.2 MB/s
PASS 398d (10s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398e: O_Direct open cleared by fcntl doesn't cause hang ========================================================== 23:45:20 (1713411920)
1+0 records in
1+0 records out
1234 bytes (1.2 kB) copied, 0.007685 s, 161 kB/s
0+1 records in
0+1 records out
1234 bytes (1.2 kB) copied, 0.0207687 s, 59.4 kB/s
PASS 398e (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398f: verify aio handles ll_direct_rw_pages errors correctly ========================================================== 23:45:24 (1713411924)
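The 398d/398e runs above, and 398f below, all exercise the O_DIRECT path on a Lustre client. The same kind of I/O can be driven from the shell with dd alone (a minimal sketch, assuming a client mounted at /mnt/lustre; the file name is hypothetical):

    # write 4 MiB with O_DIRECT, bypassing the client page cache
    dd if=/dev/zero of=/mnt/lustre/f_dio_demo bs=1M count=4 oflag=direct
    # read it back with O_DIRECT as well; any short read indicates a problem
    dd if=/mnt/lustre/f_dio_demo of=/dev/null bs=1M iflag=direct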
/home/green/git/lustre-release/lustre/tests/aiocp
64+0 records in
64+0 records out
67108864 bytes (67 MB) copied, 1.52134 s, 44.1 MB/s
fail_loc=0x1418
read missed bytes at 0 expected 67108864 got -12
fail_loc=0
Binary files /mnt/lustre/f398f.sanity and /mnt/lustre/f398f.sanity.aio differ
PASS 398f (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398g: verify parallel dio async RPC submission ========================================================== 23:45:31 (1713411931)
1+0 records in
1+0 records out
8388608 bytes (8.4 MB) copied, 0.161123 s, 52.1 MB/s
osc.lustre-OST0000-osc-ffff88012a451000.max_pages_per_rpc=1M
fail_loc=0x214
fail_val=2
osc.lustre-OST0000-osc-ffff88012a451000.rpc_stats=c
osc.lustre-OST0001-osc-ffff88012a451000.rpc_stats=c
1+0 records in
1+0 records out
8388608 bytes (8.4 MB) copied, 2.29171 s, 3.7 MB/s
osc.lustre-OST0000-osc-ffff88012a451000.rpc_stats=
snapshot_time:         1713411934.844175672 secs.nsecs
start_time:            1713411932.502738137 secs.nsecs
elapsed_time:          2.341437535 secs.nsecs
read RPCs in flight:  0
write RPCs in flight: 0
pending write pages:  0
pending read pages:   0

                        read                    write
pages per rpc         rpcs   % cum % |       rpcs   % cum %
1:                       0   0   0   |          0   0   0
2:                       0   0   0   |          0   0   0
4:                       0   0   0   |          0   0   0
8:                       0   0   0   |          0   0   0
16:                      0   0   0   |          0   0   0
32:                      0   0   0   |          0   0   0
64:                      0   0   0   |          0   0   0
128:                     0   0   0   |          0   0   0
256:                     0   0   0   |          8 100 100

                        read                    write
rpcs in flight        rpcs   % cum % |       rpcs   % cum %
1:                       0   0   0   |          1  12  12
2:                       0   0   0   |          1  12  25
3:                       0   0   0   |          1  12  37
4:                       0   0   0   |          1  12  50
5:                       0   0   0   |          1  12  62
6:                       0   0   0   |          1  12  75
7:                       0   0   0   |          1  12  87
8:                       0   0   0   |          1  12 100

                        read                    write
offset                rpcs   % cum % |       rpcs   % cum %
0:                       0   0   0   |          2  25  25
1:                       0   0   0   |          0   0  25
2:                       0   0   0   |          0   0  25
4:                       0   0   0   |          0   0  25
8:                       0   0   0   |          0   0  25
16:                      0   0   0   |          0   0  25
32:                      0   0   0   |          0   0  25
64:                      0   0   0   |          0   0  25
128:                     0   0   0   |          0   0  25
256:                     0   0   0   |          2  25  50
512:                     0   0   0   |          4  50 100

osc.lustre-OST0000-osc-ffff88012a451000.rpc_stats=c
osc.lustre-OST0001-osc-ffff88012a451000.rpc_stats=c
llite.lustre-ffff88012a451000.parallel_dio=0
1+0 records in
1+0 records out
8388608 bytes (8.4 MB) copied, 16.6405 s, 504 kB/s
osc.lustre-OST0000-osc-ffff88012a451000.rpc_stats=
snapshot_time:         1713411951.554790163 secs.nsecs
start_time:            1713411934.864440690 secs.nsecs
elapsed_time:          16.690349473 secs.nsecs
read RPCs in flight:  0
write RPCs in flight: 0
pending write pages:  0
pending read pages:   0

                        read                    write
pages per rpc         rpcs   % cum % |       rpcs   % cum %
1:                       0   0   0   |          0   0   0
2:                       0   0   0   |          0   0   0
4:                       0   0   0   |          0   0   0
8:                       0   0   0   |          0   0   0
16:                      0   0   0   |          0   0   0
32:                      0   0   0   |          0   0   0
64:                      0   0   0   |          0   0   0
128:                     0   0   0   |          0   0   0
256:                     0   0   0   |          8 100 100

                        read                    write
rpcs in flight        rpcs   % cum % |       rpcs   % cum %
1:                       0   0   0   |          8 100 100

                        read                    write
offset                rpcs   % cum % |       rpcs   % cum %
0:                       0   0   0   |          2  25  25
1:                       0   0   0   |          0   0  25
2:                       0   0   0   |          0   0  25
4:                       0   0   0   |          0   0  25
8:                       0   0   0   |          0   0  25
16:                      0   0   0   |          0   0  25
32:                      0   0   0   |          0   0  25
64:                      0   0   0   |          0   0  25
128:                     0   0   0   |          0   0  25
256:                     0   0   0   |          2  25  50
512:                     0   0   0   |          4  50 100

llite.lustre-ffff88012a451000.parallel_dio=1
fail_loc=0
PASS 398g (22s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398h: verify correctness of read & write with i/o size >> stripe size ========================================================== 23:45:55 (1713411955)
8+0 records in
8+0 records out
67108864 bytes (67 MB) copied, 1.15003 s, 58.4 MB/s
8+0 records in
8+0 records out
67108864 bytes (67 MB) copied, 1.32127 s, 50.8 MB/s
PASS 398h (8s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398i: verify parallel dio handles ll_direct_rw_pages errors correctly ========================================================== 23:46:05 (1713411965)
8+0 records in
8+0 records out
67108864 bytes (67 MB) copied, 1.41208 s, 47.5 MB/s
fail_loc=0x1418
dd: error reading '/mnt/lustre/f398i.sanity': Cannot allocate memory
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.0159456 s, 0.0 kB/s
diff: /mnt/lustre/f398i.sanity: Cannot allocate memory
PASS 398i (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398j: test parallel dio where stripe size > rpc_size ========================================================== 23:46:12 (1713411972)
osc.lustre-OST0000-osc-ffff88012a451000.max_pages_per_rpc=1M
8+0 records in
8+0 records out
67108864 bytes (67 MB) copied, 1.44439 s, 46.5 MB/s
8+0 records in
8+0 records out
67108864 bytes (67 MB) copied, 2.53592 s, 26.5 MB/s
PASS 398j (8s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398k: test enospc on first stripe ========= 23:46:23 (1713411983)
Waiting for MDT destroys to complete
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait at most 40 secs for oleg146-server mds-ost sync done.
SKIP: sanity test_398k 7185564 > 600000 skipping out-of-space test on OST0
SKIP 398k (13s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 398l: test enospc on intermediate stripe/RPC ========================================================== 23:46:38 (1713411998)
Waiting for MDT destroys to complete
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait at most 40 secs for oleg146-server mds-ost sync done.
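The rpc_stats dumps in 398g above follow a clear/measure/read pattern that can be reproduced by hand with the same tunables shown in the log (a sketch; the file name is hypothetical):

    # clear the per-OSC RPC histograms
    lctl set_param osc.*.rpc_stats=c
    # drive some direct I/O so fresh RPCs are counted
    dd if=/dev/zero of=/mnt/lustre/f_rpc_demo bs=8M count=1 oflag=direct
    # read back the pages-per-RPC, RPCs-in-flight and offset histograms
    lctl get_param osc.*.rpc_stats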
2+0 records in 2+0 records out 16777216 bytes (17 MB) copied, 0.315457 s, 53.2 MB/s SKIP: sanity test_398l 7167748 > 600000 skipping out-of-space test on OST0 SKIP 398l (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398m: test RPC failures with parallel dio ========================================================== 23:46:46 (1713412006) fail_loc=0x20e fail_val=1 dd: error writing '/mnt/lustre/f398m.sanity': Input/output error 1+0 records in 0+0 records out 0 bytes (0 B) copied, 54.6895 s, 0.0 kB/s fail_loc=0 fail_val=0 8+0 records in 8+0 records out 67108864 bytes (67 MB) copied, 1.38324 s, 48.5 MB/s fail_loc=0x20f fail_val=1 dd: error reading '/mnt/lustre/f398m.sanity': Input/output error 0+0 records in 0+0 records out 0 bytes (0 B) copied, 55.0937 s, 0.0 kB/s fail_loc=0 fail_val=0 fail_loc=0x20e fail_val=2 dd: error writing '/mnt/lustre/f398m.sanity': Input/output error 1+0 records in 0+0 records out 0 bytes (0 B) copied, 55.2073 s, 0.0 kB/s fail_loc=0 fail_val=0 8+0 records in 8+0 records out 67108864 bytes (67 MB) copied, 1.36483 s, 49.2 MB/s fail_loc=0x20f fail_val=2 dd: error reading '/mnt/lustre/f398m.sanity': Input/output error 0+0 records in 0+0 records out 0 bytes (0 B) copied, 55.0446 s, 0.0 kB/s fail_loc=0 fail_val=0 fail_loc=0 fail_loc=0 PASS 398m (229s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398n: test append with parallel DIO ======= 23:50:38 (1713412238) 8+0 records in 8+0 records out 67108864 bytes (67 MB) copied, 2.04846 s, 32.8 MB/s 8+0 records in 8+0 records out 67108864 bytes (67 MB) copied, 1.28237 s, 52.3 MB/s PASS 398n (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398o: right kms with DIO ================== 23:50:48 (1713412248) directio on /mnt/lustre/f398o.sanity for 1x1 bytes PASS PASS 398o (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398p: race aio with buffered i/o ========== 23:50:53 (1713412253) /home/green/git/lustre-release/lustre/tests/aiocp 1+0 records in 1+0 records out 26214400 bytes (26 MB) copied, 0.834261 s, 31.4 MB/s bs: 4096, file_size 26214400 3200+0 records in 3200+0 records out 26214400 bytes (26 MB) copied, 3.85159 s, 6.8 MB/s /mnt/lustre/f398p.sanity.2 has type file OK /mnt/lustre/f398p.sanity.2 has size 26214400 OK bs: 16384, file_size 26214400 800+0 records in 800+0 records out 26214400 bytes (26 MB) copied, 1.2716 s, 20.6 MB/s /mnt/lustre/f398p.sanity.2 has type file OK /mnt/lustre/f398p.sanity.2 has size 26214400 OK bs: 1048576, file_size 26214400 12+1 records in 12+1 records out 26214400 bytes (26 MB) copied, 0.826589 s, 31.7 MB/s /mnt/lustre/f398p.sanity.2 has type file OK /mnt/lustre/f398p.sanity.2 has size 26214400 OK bs: 4194304, file_size 26214400 3+1 records in 3+1 records out 26214400 bytes (26 MB) copied, 0.819719 s, 32.0 MB/s /mnt/lustre/f398p.sanity.2 has type file OK /mnt/lustre/f398p.sanity.2 has size 26214400 OK PASS 398p (19s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398q: race dio with buffered i/o ========== 23:51:14 (1713412274) 1+0 records in 1+0 records out 26214400 bytes (26 MB) copied, 0.81274 s, 32.3 MB/s bs: 4096, file_size 26214400 3200+0 records in 3200+0 records out 26214400 bytes (26 MB) copied, 6.01234 s, 4.4 MB/s 3200+0 records in 3200+0 records out 26214400 bytes (26 MB) copied, 27.6758 s, 947 kB/s 
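Tests 398p and 398q above race AIO or direct I/O against buffered I/O on the same data. The direct-vs-buffered variant can be approximated with two concurrent dd processes (a sketch; file names are hypothetical and the interleaving depends on timing):

    # start a direct-I/O copy and a buffered copy of the same source in parallel
    dd if=/mnt/lustre/f_src of=/mnt/lustre/f_dst.dio bs=1M oflag=direct &
    dd if=/mnt/lustre/f_src of=/mnt/lustre/f_dst.buf bs=1M &
    wait
    # both copies must still match the source byte for byte
    cmp /mnt/lustre/f_src /mnt/lustre/f_dst.dio
    cmp /mnt/lustre/f_src /mnt/lustre/f_dst.buf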
/mnt/lustre/f398q.sanity.2 has type file OK /mnt/lustre/f398q.sanity.2 has size 26214400 OK bs: 16384, file_size 26214400 800+0 records in 800+0 records out 26214400 bytes (26 MB) copied, 1.50093 s, 17.5 MB/s 800+0 records in 800+0 records out 26214400 bytes (26 MB) copied, 7.57312 s, 3.5 MB/s /mnt/lustre/f398q.sanity.2 has type file OK /mnt/lustre/f398q.sanity.2 has size 26214400 OK bs: 1048576, file_size 26214400 12+1 records in 12+1 records out 26214400 bytes (26 MB) copied, 0.954274 s, 27.5 MB/s 12+1 records in 12+1 records out 26214400 bytes (26 MB) copied, 1.50316 s, 17.4 MB/s /mnt/lustre/f398q.sanity.2 has type file OK /mnt/lustre/f398q.sanity.2 has size 26214400 OK bs: 4194304, file_size 26214400 3+1 records in 3+1 records out 26214400 bytes (26 MB) copied, 0.884747 s, 29.6 MB/s 3+1 records in 3+1 records out 26214400 bytes (26 MB) copied, 1.14998 s, 22.8 MB/s /mnt/lustre/f398q.sanity.2 has type file OK /mnt/lustre/f398q.sanity.2 has size 26214400 OK PASS 398q (44s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398r: i/o error on file read ============== 23:52:01 (1713412321) fail_loc=0x20f cat: /mnt/lustre/f398r.sanity: Input/output error PASS 398r (58s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 398s: i/o error on mirror file read ======= 23:53:02 (1713412382) fail_loc=0x20f cat: /mnt/lustre/f398s.sanity: Input/output error PASS 398s (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 399a: fake write should not be slower than normal write ========================================================== 23:53:07 (1713412387) debug=0 1000+0 records in 1000+0 records out 1048576000 bytes (1.0 GB) copied, 18.9699 s, 55.3 MB/s fail_loc=0x238 1000+0 records in 1000+0 records out 1048576000 bytes (1.0 GB) copied, 17.5191 s, 59.9 MB/s /mnt/lustre/f399a.sanity has type file OK /mnt/lustre/f399a.sanity has size 1048576000 OK fail_loc=0 fake write 17.543528870 vs. normal write 18.998868019 in seconds PASS 399a (41s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 399b: fake read should not be slower than normal read ========================================================== 23:53:51 (1713412431) debug=0 1000+0 records in 1000+0 records out 1048576000 bytes (1.0 GB) copied, 2.26815 s, 462 MB/s fail_loc=0x238 1000+0 records in 1000+0 records out 1048576000 bytes (1.0 GB) copied, 0.364005 s, 2.9 GB/s fail_loc=0 fake read .383245773 vs. 
normal read 2.284345113 in seconds PASS 399b (10s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity test_400a skipping excluded test 400a debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 400b: packaged headers can be compiled ==== 23:54:04 (1713412444) PASS 400b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 401a: Verify if 'lctl list_param -R' can list parameters recursively ========================================================== 23:54:10 (1713412450) proc_dirs='/proc/fs/lustre/ /sys/fs/lustre/ /sys/kernel/debug/lnet/ /sys/kernel/debug/lustre/' PASS 401a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 401b: Verify 'lctl {get,set}_param' continue after error ========================================================== 23:54:15 (1713412455) error: set_param: param_path 'foo': No such file or directory error: set_param: setting 'foo'='bar': No such file or directory jobid_name=testing%p error: set_param: param_path 'bar': No such file or directory error: set_param: setting 'bar'='baz': No such file or directory error: get_param: param_path 'foe': No such file or directory error: get_param: param_path 'baz': No such file or directory error: set_param: param_path 'fog': No such file or directory error: set_param: setting 'fog'='bam': No such file or directory error: set_param: param_path 'bat': No such file or directory error: set_param: setting 'bat'='fog': No such file or directory error: get_param: param_path 'foe': No such file or directory error: get_param: param_path 'bag': No such file or directory PASS 401b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 401c: Verify 'lctl set_param' without value fails in either format. 
========================================================== 23:54:20 (1713412460) error: set_param: setting jobid_name: Invalid argument error: set_param: setting jobid_name: Invalid argument PASS 401c (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 401d: Verify 'lctl set_param' accepts values containing '=' ========================================================== 23:54:25 (1713412465) jobid_name=foo=bar%p jobid_name=%e.%u jobid_name=foo=bar%p jobid_name=%e.%u PASS 401d (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 401e: verify 'lctl get_param' works with NID in parameter ========================================================== 23:54:31 (1713412471) ldlm.namespaces.MGC192.168.201.146@tcp ldlm.namespaces.MGC192.168.201.146@tcp.contended_locks ldlm.namespaces.MGC192.168.201.146@tcp.contention_seconds ldlm.namespaces.MGC192.168.201.146@tcp.ctime_age_limit ldlm.namespaces.MGC192.168.201.146@tcp.dirty_age_limit ldlm.namespaces.MGC192.168.201.146@tcp.early_lock_cancel ldlm.namespaces.MGC192.168.201.146@tcp.lock_count ldlm.namespaces.MGC192.168.201.146@tcp.lock_timeouts ldlm.namespaces.MGC192.168.201.146@tcp.lock_unused_count ldlm.namespaces.MGC192.168.201.146@tcp.lru_cancel_batch ldlm.namespaces.MGC192.168.201.146@tcp.lru_max_age ldlm.namespaces.MGC192.168.201.146@tcp.lru_size ldlm.namespaces.MGC192.168.201.146@tcp.max_nolock_bytes ldlm.namespaces.MGC192.168.201.146@tcp.max_parallel_ast ldlm.namespaces.MGC192.168.201.146@tcp.ns_recalc_pct ldlm.namespaces.MGC192.168.201.146@tcp.pool ldlm.namespaces.MGC192.168.201.146@tcp.pool.cancel_rate ldlm.namespaces.MGC192.168.201.146@tcp.pool.client_lock_volume ldlm.namespaces.MGC192.168.201.146@tcp.pool.grant_plan ldlm.namespaces.MGC192.168.201.146@tcp.pool.grant_rate ldlm.namespaces.MGC192.168.201.146@tcp.pool.grant_speed ldlm.namespaces.MGC192.168.201.146@tcp.pool.granted ldlm.namespaces.MGC192.168.201.146@tcp.pool.limit ldlm.namespaces.MGC192.168.201.146@tcp.pool.lock_volume_factor ldlm.namespaces.MGC192.168.201.146@tcp.pool.recalc_period ldlm.namespaces.MGC192.168.201.146@tcp.pool.recalc_time ldlm.namespaces.MGC192.168.201.146@tcp.pool.server_lock_volume ldlm.namespaces.MGC192.168.201.146@tcp.pool.state ldlm.namespaces.MGC192.168.201.146@tcp.pool.stats ldlm.namespaces.MGC192.168.201.146@tcp.resource_count ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=400 PASS 401e (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 402: Return ENOENT to lod_generate_and_set_lovea ========================================================== 23:54:36 (1713412476) fail_loc=0x8000015c touch: cannot touch '/mnt/lustre/d402.sanity/f402.sanity': No such file or directory Touch failed - OK PASS 402 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 403: i_nlink should not drop to zero due to aliasing ========================================================== 23:54:41 (1713412481) fail_loc=0x80001409 vm.drop_caches = 2 PASS 403 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 404: validate manual {de}activated works properly for OSPs ========================================================== 23:54:46 (1713412486) Deactivate: lustre-OST0000-osc-MDT0000 Activate: lustre-OST0000-osc-MDT0000 Deactivate: lustre-OST0001-osc-MDT0000 Activate: lustre-OST0001-osc-MDT0000 Deactivate: lustre-OST0000-osc-MDT0001 
Activate: lustre-OST0000-osc-MDT0001 Deactivate: lustre-OST0001-osc-MDT0001 Activate: lustre-OST0001-osc-MDT0001 PASS 404 (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 405: Various layout swap lock tests ======= 23:54:56 (1713412496) SKIP: sanity test_405 layout swap does not support DOM files so far SKIP 405 (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 406: DNE support fs default striping ====== 23:55:03 (1713412503) Creating new pool oleg146-server: Pool lustre.test_406 created Adding targets to pool oleg146-server: OST lustre-OST0000_UUID added to pool lustre.test_406 oleg146-server: OST lustre-OST0001_UUID added to pool lustre.test_406 Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID ' - unlinked 0 (time 1713412511 ; total 0 ; last 0) total: 20 unlinks in 1 seconds: 20.000000 unlinks/second Removing all targets from pool oleg146-server: OST lustre-OST0000_UUID removed from pool lustre.test_406 oleg146-server: OST lustre-OST0001_UUID removed from pool lustre.test_406 Destroying pool oleg146-server: Pool lustre.test_406 destroyed PASS 406 (18s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity test_407 skipping ALWAYS excluded test 407 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 408: drop_caches should not hang due to page leaks ========================================================== 23:55:24 (1713412524) 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00847119 s, 484 kB/s fail_loc=0x8000040a dd: error writing '/mnt/lustre/f408.sanity': Invalid argument 1+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0170306 s, 0.0 kB/s PASS 408 (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 409: Large amount of cross-MDTs hard links on the same file ========================================================== 23:55:32 (1713412532) Create 1K hard links start at Wed Apr 17 23:55:32 EDT 2024 total: 1000 link in 5.83 seconds: 171.42 ops/second Links count should be right although linkEA overflow File: '/mnt/lustre/d409.sanity/guard' Size: 0 Blocks: 0 IO Block: 4194304 regular empty file Device: 2c54f966h/743766374d Inode: 162129771587700499 Links: 1001 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-17 23:55:32.000000000 -0400 Modify: 2024-04-17 23:55:32.000000000 -0400 Change: 2024-04-17 23:55:39.000000000 -0400 Birth: - List all links start at Wed Apr 17 23:55:40 EDT 2024 Unlink hard links start at Wed Apr 17 23:55:47 EDT 2024 - unlinked 0 (time 1713412548 ; total 0 ; last 0) total: 1000 unlinks in 5 seconds: 200.000000 unlinks/second Unlink hard links finished at Wed Apr 17 23:55:54 EDT 2024 PASS 409 (25s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 410: Test inode number returned from kernel thread ========================================================== 23:55:58 (1713412558) kunit/kinode options: 'run_id=18439 fname=/mnt/lustre/f410.sanity' PASS 410 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 411a: Slab allocation error with cgroup does not LBUG ========================================================== 23:56:02 (1713412562) 100+0 records in 100+0 records out 104857600 bytes (105 MB) copied, 2.52999 s, 41.4 MB/s bash: line 1: 10620 Killed dd if=/mnt/lustre/f411a.sanity of=/dev/null cache 843776 rss 0 rss_huge 0 mapped_file 0 swap 0 pgpgin 586 
pgpgout 380
pgfault 350
pgmajfault 26
inactive_anon 0
active_anon 0
inactive_file 589824
active_file 253952
unevictable 0
hierarchical_memory_limit 1048576
hierarchical_memsw_limit 9223372036854771712
total_cache 843776
total_rss 0
total_rss_huge 0
total_mapped_file 0
total_swap 0
total_pgpgin 0
total_pgpgout 0
total_pgfault 0
total_pgmajfault 0
total_inactive_anon 0
total_active_anon 0
total_inactive_file 589824
total_active_file 253952
total_unevictable 0
PASS 411a (6s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity test_411b skipping ALWAYS excluded test 411b
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 412: mkdir on specific MDTs =============== 23:56:11 (1713412571)
lmv_stripe_count: 2
lmv_stripe_offset: 1
lmv_hash_type: crush
mdtidx FID[seq:oid:ver]
     1 [0x240002b12:0x1fc:0x0]
     0 [0x200005222:0x192:0x0]
PASS 412 (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 413a: QoS mkdir with 'lfs mkdir -i -1' ==== 23:56:17 (1713412577)
lmv.lustre-clilmv-ffff88012a451000.qos_maxage=1
lod.lustre-MDT0000-mdtlov.mdt_qos_maxage=1
lod.lustre-MDT0001-mdtlov.mdt_qos_maxage=1
Check for uneven MDTs: 0 using cmd fallocate -l 128K
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116       18860     1268828   2% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116       17096     1270592   2% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116       21900     3585120   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116      108992     3498028   4% /mnt/lustre[OST:1]
filesystem_summary:      7666232      130892     7083148   2% /mnt/lustre

UUID                      Inodes       IUsed       IFree IUse% Mounted on
lustre-MDT0000_UUID      1024000        8633     1015367   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1024000        7131     1016869   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID       262144       14677      247467   6% /mnt/lustre[OST:0]
lustre-OST0001_UUID       262144       14613      247531   6% /mnt/lustre[OST:1]
filesystem_summary:       510762       15764      494998   4% /mnt/lustre

weight diff=0% must be > 120% ...Fill MDT0 with 200 files: loop 0
weight diff=2% must be > 120% ...Fill MDT0 with 200 files: loop 1
weight diff=4% must be > 120% ...Fill MDT0 with 200 files: loop 2
weight diff=7% must be > 120% ...Fill MDT0 with 200 files: loop 3
weight diff=9% must be > 120% ...Fill MDT0 with 200 files: loop 4
weight diff=12% must be > 120% ...Fill MDT0 with 200 files: loop 5
weight diff=14% must be > 120% ...Fill MDT0 with 200 files: loop 6
weight diff=17% must be > 120% ...Fill MDT0 with 200 files: loop 7
weight diff=20% must be > 120% ...Fill MDT0 with 200 files: loop 8
weight diff=23% must be > 120% ...Fill MDT0 with 200 files: loop 9
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116      282996     1004692  22% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116       17096     1270592   2% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116       21900     3585120   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116      108992     3498028   4% /mnt/lustre[OST:1]
filesystem_summary:      7666232      130892     7083148   2% /mnt/lustre

UUID                      Inodes       IUsed       IFree IUse% Mounted on
lustre-MDT0000_UUID      1024000       10644     1013356   2% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1024000        7131     1016869   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID       262144       14677      247467   6% /mnt/lustre[OST:0]
lustre-OST0001_UUID       262144       14613      247531   6% /mnt/lustre[OST:1]
filesystem_summary:       512773       17775      494998   4% /mnt/lustre

weight diff=26% must be > 120% ...Fill MDT0 with 200 files: loop 10
weight diff=30% must be > 120% ...Fill MDT0 with 200 files: loop 11
weight diff=34% must be > 120% ...Fill MDT0 with 200 files: loop 12
weight diff=37% must be > 120% ...Fill MDT0 with 200 files: loop 13
weight diff=41% must be > 120% ...Fill MDT0 with 200 files: loop 14
weight diff=46% must be > 120% ...Fill MDT0 with 200 files: loop 15
weight diff=50% must be > 120% ...Fill MDT0 with 200 files: loop 16
weight diff=55% must be > 120% ...Fill MDT0 with 200 files: loop 17
weight diff=60% must be > 120% ...Fill MDT0 with 200 files: loop 18
weight diff=66% must be > 120% ...Fill MDT0 with 200 files: loop 19
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116      547212      740476  43% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116       17096     1270592   2% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116       21900     3585120   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116      108992     3498028   4% /mnt/lustre[OST:1]
filesystem_summary:      7666232      130892     7083148   2% /mnt/lustre

UUID                      Inodes       IUsed       IFree IUse% Mounted on
lustre-MDT0000_UUID      1024000       12654     1011346   2% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1024000        7131     1016869   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID       262144       14677      247467   6% /mnt/lustre[OST:0]
lustre-OST0001_UUID       262144       14613      247531   6% /mnt/lustre[OST:1]
filesystem_summary:       514783       19785      494998   4% /mnt/lustre

weight diff=72% must be > 120% ...Fill MDT0 with 200 files: loop 20
weight diff=78% must be > 120% ...Fill MDT0 with 200 files: loop 21
weight diff=85% must be > 120% ...Fill MDT0 with 200 files: loop 22
weight diff=93% must be > 120% ...Fill MDT0 with 200 files: loop 23
weight diff=101% must be > 120% ...Fill MDT0 with 200 files: loop 24
weight diff=110% must be > 120% ...Fill MDT0 with 200 files: loop 25
weight diff=119% must be > 120% ...Fill MDT0 with 200 files: loop 26
MDT filesfree available: 1009939 1016869
MDT blocks available: 555524 1270592
weight diff=130%
Mkdir (stripe_count 1) roundrobin:
120 directories created on MDT0
120 directories created on MDT1
Check for uneven MDTs: stripe_count=1 min_idx=0 max_idx=1
Check for uneven MDTs: 0 using cmd fallocate -l 128K
MDT filesfree available: 1009816 1016628
MDT blocks available: 554888 1269460
weight diff=130%
MDT filesfree available: 1009816 1016628
MDT blocks available: 554888 1269460
weight diff=130%
Mkdir (stripe_count 1) with balanced space usage:
70 directories created on MDT0 : curmax=70
170 directories created on MDT1 : curmax=170
- unlinked 0 (time 1713412758 ; total 0 ; last 0)
total: 240 unlinks in 1 seconds: 240.000000 unlinks/second
- unlinked 0 (time 1713412760 ; total 0 ; last 0)
total: 240 unlinks in 1 seconds: 240.000000 unlinks/second
PASS 413a (187s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 413b: QoS mkdir under dir whose default LMV starting MDT offset is -1 ========================================================== 23:59:26 (1713412766)
lmv.lustre-clilmv-ffff88012a451000.qos_maxage=1
lod.lustre-MDT0000-mdtlov.mdt_qos_maxage=1
lod.lustre-MDT0001-mdtlov.mdt_qos_maxage=1
Check for uneven MDTs: 0 using cmd fallocate -l 128K
MDT filesfree available: 1009936 1016868
MDT blocks available: 554940 1270012
weight diff=130%
getfattr: Removing leading '/' from absolute path names
trusted.dmv=0xd00cd30c01000000ffffffff000000000000000000020000000000000000000000000000000000000000000000000000
defstripe: 'lmv_stripe_count: 1 lmv_stripe_offset: -1 lmv_hash_type: none lmv_max_inherit: -1 lmv_max_inherit_rr: 2'
Mkdir (stripe_count 1) roundrobin:
120 directories created on MDT0
120 directories created on MDT1
Check for uneven MDTs: stripe_count=1 min_idx=0 max_idx=1
Check for uneven MDTs: 0 using cmd fallocate -l 128K
MDT filesfree available: 1009814 1016625
MDT blocks
available: 554328 1268880 weight diff=130% MDT filesfree available: 1009814 1016625 MDT blocks available: 554328 1268880 weight diff=130% Mkdir (stripe_count 1) with balanced space usage: 68 directories created on MDT0 : curmax=68 172 directories created on MDT1 : curmax=172 - unlinked 0 (time 1713412800 ; total 0 ; last 0) total: 240 unlinks in 1 seconds: 240.000000 unlinks/second - unlinked 0 (time 1713412802 ; total 0 ; last 0) total: 240 unlinks in 1 seconds: 240.000000 unlinks/second PASS 413b (39s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413c: mkdir with default LMV max inherit rr ========================================================== 00:00:08 (1713412808) lmv.lustre-clilmv-ffff88012a451000.qos_maxage=1 lod.lustre-MDT0000-mdtlov.mdt_qos_maxage=1 lod.lustre-MDT0001-mdtlov.mdt_qos_maxage=1 Check for uneven MDTs: 0 using cmd fallocate -l 128K MDT filesfree available: 1009934 1016865 MDT blocks available: 554396 1269476 weight diff=130% getfattr: Removing leading '/' from absolute path names trusted.dmv=0xd00cd30c01000000ffffffff000000000000000000000000000000000000000000000000000000000000000000000000 defstripe: 'lmv_stripe_count: 1 lmv_stripe_offset: -1 lmv_hash_type: none lmv_max_inherit: -1 lmv_max_inherit_rr: 0' Mkdir (stripe_count 1) on stripe 1 0 directories created on MDT0 240 directories created on MDT1 Check for uneven MDTs: stripe_count=1 min_idx=0 max_idx=1 Check for uneven MDTs: 0 using cmd fallocate -l 128K MDT filesfree available: 1009932 1016622 MDT blocks available: 554388 1268484 weight diff=130% MDT filesfree available: 1009932 1016622 MDT blocks available: 554388 1268484 weight diff=130% Mkdir (stripe_count 1) with balanced space usage: 70 directories created on MDT0 : curmax=70 170 directories created on MDT1 : curmax=170 - unlinked 0 (time 1713412842 ; total 0 ; last 0) total: 240 unlinks in 1 seconds: 240.000000 unlinks/second - unlinked 0 (time 1713412844 ; total 0 ; last 0) total: 240 unlinks in 1 seconds: 240.000000 unlinks/second PASS 413c (38s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413d: inherit ROOT default LMV ============ 00:00:49 (1713412849) total: 200 mkdir in 0.67 seconds: 299.71 ops/second PASS 413d (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413e: check default max-inherit value ===== 00:00:59 (1713412859) PASS 413e (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413f: lfs getdirstripe -D list ROOT default LMV if it's not set on dir ========================================================== 00:01:04 (1713412864) PASS 413f (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413g: enforce ROOT default LMV on subdir mount ========================================================== 00:01:10 (1713412870) Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre/d413g.sanity/l2/l3/l4 /mnt/lustre2 total: 200 mkdir in 1.72 seconds: 115.95 ops/second 192.168.201.146@tcp:/lustre/d413g.sanity/l2/l3/l4 /mnt/lustre2 lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg146-client.virtnet /mnt/lustre2 (opts:) PASS 413g (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413h: don't stick to parent for round-robin dirs 
========================================================== 00:01:22 (1713412882) lmv.lustre-clilmv-ffff88012a451000.qos_maxage=1 Check for uneven MDTs: 0 using cmd fallocate -l 128K MDT filesfree available: 1009620 1016511 MDT blocks available: 552540 1267504 weight diff=130% dir=/mnt/lustre/d413h.sanity/l1/l2/l3/l4/l5: 28 1 12 0 dir=/mnt/lustre/d413h.sanity/l1/l2/l3/l4/l5/d0: 23 1 17 0 dir=/mnt/lustre/d413h.sanity/l1/l2/l3/l4/l5/d0/d0: 27 1 13 0 dir=/mnt/lustre/d413h.sanity/l1/l2/l3/l4/l5/d0/d0/d0: 27 1 13 0 dir=/mnt/lustre/d413h.sanity/l1/l2/l3/l4/l5/d0/d0/d0/d0: 40 1 dir=/mnt/lustre/d413h.sanity/l1/l2/l3/l4/l5/d0/d0/d0/d0/d0: 40 1 dir=/mnt/lustre/d413h.sanity/l1/l2/l3/l4/l5/d0/d0/d0/d0/d0/d0: 40 1 PASS 413h (27s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413i: check default layout inheritance ==== 00:01:51 (1713412911) PASS 413i (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413j: set default LMV by setxattr ========= 00:01:57 (1713412917) getfattr: Removing leading '/' from absolute path names getfattr: Removing leading '/' from absolute path names getfattr: Removing leading '/' from absolute path names # file: mnt/lustre/d413j.sanity/sub trusted.dmv=0s0AzTDAIAAAD/////AAAAAAAAAAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA /mnt/lustre/d413j.sanity/sub: trusted.dmv: No such attribute PASS 413j (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413k: QoS mkdir exclude prefixes ========== 00:02:03 (1713412923) lmv.lustre-clilmv-ffff88012a451000.qos_exclude_prefixes=+abc:123:foo bar lmv.lustre-clilmv-ffff88012a451000.qos_exclude_prefixes=-abc:123:foo bar lmv.lustre-clilmv-ffff88012a451000.qos_exclude_prefixes=_temporary PASS 413k (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 413z: 413 test cleanup ==================== 00:02:09 (1713412929) oleg146-server: ssh_exchange_identification: Connection closed by remote host pdsh@oleg146-client: oleg146-server: ssh exited with exit code 255 oleg146-server: ssh_exchange_identification: Connection closed by remote host pdsh@oleg146-client: oleg146-server: ssh exited with exit code 255 oleg146-server: ssh_exchange_identification: Connection closed by remote host pdsh@oleg146-client: oleg146-server: ssh exited with exit code 255 oleg146-server: ssh_exchange_identification: Connection closed by remote host pdsh@oleg146-client: oleg146-server: ssh exited with exit code 255 oleg146-server: ssh_exchange_identification: Connection closed by remote host pdsh@oleg146-client: oleg146-server: ssh exited with exit code 255 oleg146-server: ssh_exchange_identification: Connection closed by remote host pdsh@oleg146-client: oleg146-server: ssh exited with exit code 255 - unlinked 0 (time 1713412930 ; total 0 ; last 0) total: 200 unlinks in 1 seconds: 200.000000 unlinks/second - unlinked 0 (time 1713412930 ; total 0 ; last 0) total: 200 unlinks in 1 seconds: 200.000000 unlinks/second - unlinked 0 (time 1713412930 ; total 0 ; last 0) total: 200 unlinks in 1 seconds: 200.000000 unlinks/second - unlinked 0 (time 1713412930 ; total 0 ; last 0) total: 200 unlinks in 1 seconds: 200.000000 unlinks/second - unlinked 0 (time 1713412930 ; total 0 ; last 0) total: 200 unlinks in 1 seconds: 200.000000 unlinks/second oleg146-client: ssh_exchange_identification: Connection closed by remote host pdsh@oleg146-client: oleg146-client: ssh exited with exit code 255 
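The 413 series above measures where new directories land under the QoS and round-robin policies. The same machinery can be exercised interactively (a sketch, assuming a two-MDT filesystem like the one in this run; the directory name is hypothetical):

    # let the client pick the MDT by QoS weight rather than a fixed index
    lfs mkdir -i -1 -c 1 /mnt/lustre/qos_demo
    # show which MDT the new directory landed on
    lfs getdirstripe /mnt/lustre/qos_demo
    # show the filesystem-wide default LMV that subdirectories inherit
    lfs getdirstripe -D /mnt/lustre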
oleg146-client: ssh_exchange_identification: Connection closed by remote host pdsh@oleg146-client: oleg146-client: ssh exited with exit code 255 - unlinked 0 (time 1713412931 ; total 0 ; last 0) total: 200 unlinks in 1 seconds: 200.000000 unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 0 seconds: inf unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 0 seconds: inf unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 0 seconds: inf unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 0 seconds: inf unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 0 seconds: inf unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 0 seconds: inf unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 0 seconds: inf unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 0 seconds: inf unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 0 seconds: inf unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 0 seconds: inf unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 1 seconds: 200.000000 unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 1 seconds: 200.000000 unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 1 seconds: 200.000000 unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 1 seconds: 200.000000 unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 1 seconds: 200.000000 unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 1 seconds: 200.000000 unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 1 seconds: 200.000000 unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 1 seconds: 200.000000 unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 1 seconds: 200.000000 unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 1 seconds: 200.000000 unlinks/second - unlinked 0 (time 1713412932 ; total 0 ; last 0) total: 200 unlinks in 1 seconds: 200.000000 unlinks/second PASS 413z (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 414: simulate ENOMEM in ptlrpc_register_bulk() ========================================================== 00:02:18 (1713412938) fail_loc=0x80000521 1+0 records in 1+0 records out 2097152 bytes (2.1 MB) copied, 0.170532 s, 12.3 MB/s PASS 414 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 415: lock revoke is not missing =========== 00:02:24 (1713412944) striped dir -i1 -c2 -H crush2 /mnt/lustre/d415.sanity total: 500 open/close in 2.10 seconds: 238.15 ops/second rename 500 files without 'touch' took 11 sec rename 500 files with 'touch' took 16 sec /home/green/git/lustre-release/lustre/tests/test-framework.sh: line 4699: 7630 Killed ( while true; do touch $DIR/$tdir; done ) (wd: ~) - unlinked 0 (time 1713412980 ; total 0 ; last 0) total: 500 unlinks in 2 seconds: 250.000000 unlinks/second PASS 415 (40s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 
416: transaction start failure won't cause system hung ========================================================== 00:03:05 (1713412985) fail_loc=0x19a lfs mkdir: dirstripe error on '/mnt/lustre/d416.sanity': Input/output error lfs setdirstripe: cannot create dir '/mnt/lustre/d416.sanity': Input/output error PASS 416 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 417: disable remote dir, striped dir and dir migration ========================================================== 00:03:11 (1713412991) lfs migrate: /mnt/lustre/d417.sanity.1 migrate failed: Operation not permitted (1) lfs mkdir: dirstripe error on '/mnt/lustre/d417.sanity.2': Operation not permitted lfs setdirstripe: cannot create dir '/mnt/lustre/d417.sanity.2': Operation not permitted lfs mkdir: dirstripe error on '/mnt/lustre/d417.sanity.3': Operation not permitted lfs setdirstripe: cannot create dir '/mnt/lustre/d417.sanity.3': Operation not permitted PASS 417 (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 418: df and lfs df outputs match ========== 00:03:20 (1713413000) Waiting for MDT destroys to complete striped dir -i0 -c2 -H crush2 /mnt/lustre/d418.sanity ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear Creating a single file and testing ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear Creating 3418 files and testing Writing 61 4K blocks and testing ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear 
ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=clear ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear 
ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear PASS 418 (40s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 419: Verify open file by name doesn't crash kernel ========================================================== 00:04:02 (1713413042) fail_loc=0x1410 fail_loc=0 PASS 419 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 420: clear SGID bit on non-directories for non-members ========================================================== 00:04:07 (1713413047) drwxrwsrwt 2 0 0 4096 Apr 18 00:04 /mnt/lustre/d420.sanity/testdir Succeed in opening file "/mnt/lustre/d420.sanity/testdir/testfile"(flags=O_RDONLY, mode=2755) -rwxr-xr-x 1 500 0 0 Apr 18 00:04 /mnt/lustre/d420.sanity/testdir/testfile PASS 420 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 421a: simple rm by fid ==================== 00:04:13 (1713413053) striped dir -i1 -c2 -H all_char /mnt/lustre/d421a.sanity total: 3 open/close in 0.05 seconds: 55.06 ops/second stat: cannot stat '/mnt/lustre/d421a.sanity/f1': No such file or directory stat: cannot stat '/mnt/lustre/d421a.sanity/f2': No such file or directory total: 3 open/close in 0.03 seconds: 113.77 ops/second remove using fsname lustre PASS 421a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 421b: rm by fid on open file ============== 00:04:18 (1713413058) striped dir -i1 -c2 -H all_char /mnt/lustre/d421b.sanity total: 3 open/close in 0.05 seconds: 56.21 ops/second multiop /mnt/lustre/d421b.sanity/f1 vo_c TMPPIPE=/tmp/multiop_open_wait_pipe.7509 lfs rmfid: cannot remove [0x200005224:0x27fe:0x0]: Device or resource busy PASS 421b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 421c: rm by fid against hardlinked files == 00:04:23 (1713413063) striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d421c.sanity total: 3 open/close in 0.05 seconds: 58.62 ops/second total: 180 link in 1.08 seconds: 167.10 ops/second PASS 421c (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 421d: rmfid en masse ====================== 00:04:32 (1713413072) striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d421d.sanity total: 4097 open/close in 7.83 seconds: 523.58 ops/second PASS 421d (20s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 421e: rmfid in DNE ======================== 00:04:52 (1713413092) total: 512 open/close in 1.00 seconds: 512.57 ops/second PASS 421e (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 421f: rmfid checks permissions ============ 00:04:59 (1713413099) striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d421f.sanity running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [rmfid] [/mnt/lustre] [[0x200005224:0x30f8:0x0]] lfs rmfid: cannot remove FIDs: Operation not permitted total 252 drwxrwxrwx 2 root root 8192 Apr 18 00:05 . drwxrwxrwx 226 root sanityusr 245760 Apr 18 00:05 .. 
-rw-r--r-- 1 root root 0 Apr 18 00:05 f running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [rmfid] [/mnt/lustre] [[0x200005224:0x30f8:0x0]] lfs rmfid: cannot remove FIDs: Operation not permitted running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d421f.sanity/f] rmfid as root running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d421f.sanity/f] running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [rmfid] [/mnt/lustre] [[0x200005224:0x30fa:0x0]] lfs rmfid: cannot remove FIDs: Operation not permitted Starting client: oleg146-client.virtnet: -o user_xattr,flock,user_fid2path oleg146-server@tcp:/lustre /tmp/lustre-nclFcj running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [rmfid] [/tmp/lustre-nclFcj] [[0x200005224:0x30fa:0x0]] total 252 drwxrwxrwx 2 root root 8192 Apr 18 00:05 . drwxrwxrwx 226 root sanityusr 245760 Apr 18 00:05 .. -rw-r--r-- 1 root root 0 Apr 18 00:05 f running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [rmfid] [/tmp/lustre-nclFcj] [[0x200005227:0x1:0x0]] lfs rmfid: cannot remove [0x200005227:0x1:0x0]: Permission denied 192.168.201.146@tcp:/lustre /tmp/lustre-nclFcj lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,user_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg146-client.virtnet /tmp/lustre-nclFcj (opts:) PASS 421f (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 421g: rmfid to return errors properly ===== 00:05:03 (1713413103) total: 512 open/close in 1.12 seconds: 455.45 ops/second PASS 421g (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 421h: rmfid with fileset mount ============ 00:05:10 (1713413110) striped dir -i1 -c2 -H all_char /mnt/lustre/d421h.sanity striped dir -i1 -c2 -H all_char /mnt/lustre/d421h.sanity/subdir File /mnt/lustre/d421h.sanity/subdir/file0 FID [0x240002b13:0x1df8:0x0] File /mnt/lustre/d421h.sanity/subdir/fileA FID [0x200005224:0x31f1:0x0] File /mnt/lustre/d421h.sanity/subdir/fileB FID [0x240002b13:0x1df9:0x0] File /mnt/lustre/d421h.sanity/subdir/fileC FID [0x200005224:0x31f2:0x0] File /mnt/lustre/d421h.sanity/fileD FID [0x240002b13:0x1dfa:0x0] Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre/d421h.sanity/subdir /mnt/lustre_other Removing FIDs: /home/green/git/lustre-release/lustre/utils/lfs rmfid /mnt/lustre_other [0x240002b13:0x1df8:0x0] [0x200005224:0x31f1:0x0] [0x240002b13:0x1dfa:0x0] [0x240002b13:0x1df9:0x0] [0x200005224:0x31f2:0x0] lfs rmfid: cannot remove [0x200005224:0x31f2:0x0]: No such file or directory lfs rmfid: cannot remove [0x240002b13:0x1dfa:0x0]: No such file or directory lfs rmfid: cannot remove [0x240002b13:0x1df8:0x0]: No such file or directory 192.168.201.146@tcp:/lustre/d421h.sanity/subdir /mnt/lustre_other lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg146-client.virtnet /mnt/lustre_other (opts:) stat: cannot stat '/mnt/lustre/d421h.sanity/subdir/fileA': No such file or directory stat: cannot stat '/mnt/lustre/d421h.sanity/subdir/fileB': No such file or directory File: '/mnt/lustre/d421h.sanity/subdir/fileC' Size: 0 Blocks: 0 IO Block: 4194304 regular empty file Device: 2c54f966h/743766374d Inode: 144115540867166706 
Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-18 00:05:11.000000000 -0400 Modify: 2024-04-18 00:05:11.000000000 -0400 Change: 2024-04-18 00:05:11.000000000 -0400 Birth: - File: '/mnt/lustre/d421h.sanity/fileD' Size: 0 Blocks: 0 IO Block: 4194304 regular empty file Device: 2c54f966h/743766374d Inode: 162129771587706362 Links: 1 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2024-04-18 00:05:11.000000000 -0400 Modify: 2024-04-18 00:05:11.000000000 -0400 Change: 2024-04-18 00:05:11.000000000 -0400 Birth: - PASS 421h (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 422: kill a process with RPC in progress == 00:05:15 (1713413115) striped dir -i0 -c1 -H crush2 /mnt/lustre/d422.sanity/d1 striped dir -i0 -c1 -H crush2 /mnt/lustre/d422.sanity/d2 striped dir -i0 -c1 -H crush /mnt/lustre/d422.sanity/d3 1+0 records in 1+0 records out 1024 bytes (1.0 kB) copied, 0.00555056 s, 184 kB/s 1+0 records in 1+0 records out 1024 bytes (1.0 kB) copied, 0.00576667 s, 178 kB/s at_max=0 at_max=0 fail_loc=0x8000050a fail_val=50000 fail_loc=0x80000722 fail_val=45 kill 22034 /home/green/git/lustre-release/lustre/tests/sanity.sh: line 30138: 22034 Killed mv $DIR/$tdir/d1/file1 $DIR/$tdir/d1/file2 at_max=600 at_max=600 [11548.326330] Lustre: mdt00_008: service thread pid 3215 was inactive for 40.030 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: [11550.630298] Lustre: mdt_io00_003: service thread pid 20382 was inactive for 40.101 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: PASS 422 (64s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 423: statfs should return a right data ==== 00:06:22 (1713413182) PASS 423 (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 424: simulate ENOMEM in ptl_send_rpc bulk reply ME attach ========================================================== 00:06:28 (1713413188) fail_loc=0x80000522 1+0 records in 1+0 records out 2097152 bytes (2.1 MB) copied, 0.175718 s, 11.9 MB/s PASS 424 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 425: lock count should not exceed lru size ========================================================== 00:06:33 (1713413193) striped dir -i1 -c2 -H crush2 /mnt/lustre/d425.sanity ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=100 ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=100 ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=100 ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=100 ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=100 ldlm.namespaces.MGC192.168.201.146@tcp.lru_size=0 ldlm.namespaces.lustre-MDT0000-mdc-ffff88012a451000.lru_size=0 ldlm.namespaces.lustre-MDT0001-mdc-ffff88012a451000.lru_size=0 ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=0 ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=0 PASS 425 (12s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 426: splice test on Lustre ================ 00:06:47 (1713413207) splice-test: splice: Bad address concurrent reader with O_DIRECT read: /mnt/lustre/f426.sanity: unexpected EOF concurrent reader with O_DIRECT concurrent reader without O_DIRECT 
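Test 425 above checks that the number of cached DLM locks honours lru_size. The tunable can be fixed or returned to dynamic sizing exactly as the log lines show (a sketch):

    # pin every client namespace to at most 100 cached locks
    lctl set_param ldlm.namespaces.*.lru_size=100
    # compare against the current per-namespace lock counts
    lctl get_param ldlm.namespaces.*.lock_count
    # a value of 0 restores dynamic LRU sizing
    lctl set_param ldlm.namespaces.*.lru_size=0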
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 426: splice test on Lustre ================ 00:06:47 (1713413207)
splice-test: splice: Bad address
concurrent reader with O_DIRECT
read: /mnt/lustre/f426.sanity: unexpected EOF
concurrent reader with O_DIRECT
concurrent reader without O_DIRECT
concurrent reader without O_DIRECT
splice-test: splice: Bad address
sequential reader with O_DIRECT
sequential reader without O_DIRECT
PASS 426 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 427: Failed DNE2 update request shouldn't corrupt updatelog ========================================================== 00:06:51 (1713413211)
striped dir -i1 -c2 -H crush2 /mnt/lustre/d427.sanity/1/dir
striped dir -i1 -c2 -H all_char /mnt/lustre/d427.sanity/2/dir2
lmv_stripe_count: 2
lmv_stripe_offset: 1
lmv_hash_type: crush2
mdtidx FID[seq:oid:ver]
     1 [0x240002b12:0x20c:0x0]
     0 [0x200005222:0x1a2:0x0]
fail_loc=0x80001708
setfattr: /mnt/lustre/d427.sanity/1/dir: No such file or directory
Failing mds2 on oleg146-server
Stopping /mnt/lustre-mds2 (opts:) on oleg146-server
00:07:01 (1713413221) shut down
Failover mds2 to oleg146-server
mount facets: mds2
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-MDT0001
00:07:15 (1713413235) targets are mounted
00:07:15 (1713413235) facet_failover done
oleg146-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid
mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec
affected facets: mds2
oleg146-server: oleg146-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0001.recovery_status 40
oleg146-server: *.lustre-MDT0001.recovery_status status: COMPLETE
PASS 427 (33s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 428: large block size IO should not hang == 00:07:26 (1713413246)
1+0 records in
1+0 records out
134217728 bytes (134 MB) copied, 62.7716 s, 2.1 MB/s
1+0 records in
1+0 records out
134217728 bytes (134 MB) copied, 72.8235 s, 1.8 MB/s
1+0 records in
1+0 records out
134217728 bytes (134 MB) copied, 72.8536 s, 1.8 MB/s
1+0 records in
1+0 records out
134217728 bytes (134 MB) copied, 73.2532 s, 1.8 MB/s
1+0 records in
1+0 records out
134217728 bytes (134 MB) copied, 5.83725 s, 23.0 MB/s
1+0 records in
1+0 records out
134217728 bytes (134 MB) copied, 5.89489 s, 22.8 MB/s
1+0 records in
1+0 records out
134217728 bytes (134 MB) copied, 5.92622 s, 22.6 MB/s
1+0 records in
1+0 records out
134217728 bytes (134 MB) copied, 5.96881 s, 22.5 MB/s
PASS 428 (82s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 429: verify if opencache flag on client side does work ========================================================== 00:08:50 (1713413330)
llite.lustre-ffff88012a451000.opencache_threshold_count=5
mdc.lustre-MDT0000-mdc-ffff88012a451000.stats=clear
1st: 2 RPCs in flight
2nd: 2 RPCs in flight
3rd: 2 RPCs in flight
PASS 429 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
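Test 429 turns on the client open cache by setting a threshold of 5 repeated opens, then compares open RPC counts between passes. A rough way to watch the same effect by hand, assuming f429 is an existing file on the client (the name and loop are illustrative):

    lctl set_param llite.*.opencache_threshold_count=5   # cache opens once a file is opened repeatedly
    lctl set_param mdc.*.stats=clear
    for i in 1 2 3; do cat /mnt/lustre/f429 >/dev/null; done
    lctl get_param mdc.*.stats      # the open RPC count should stop growing once the open is cached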
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 430a: lseek: SEEK_DATA/SEEK_HOLE basic functionality ========================================================== 00:08:54 (1713413334)
Component #1: 1M DoM, component #2: EOF, 2 stripes 1M
1+0 records in
1+0 records out
262144 bytes (262 kB) copied, 0.0102039 s, 25.7 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0431119 s, 24.3 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0368922 s, 28.4 MB/s
Data at 256K...512K, 2M...3M and 4M...5M
Seeking hole from 1000 ... 1000
Seeking data from 1000 ... 262144
Seeking hole from 300000 ... 524288
Seeking data from 300000 ... 300000
Seeking hole from 1000000 ... 1000000
Seeking data from 1000000 ... 2097152
Seeking hole from 1500000 ... 1500000
Seeking data from 1500000 ... 2097152
Seeking hole from 3000000 ... 3145728
Seeking data from 3000000 ... 3000000
1+0 records in
1+0 records out
655360 bytes (655 kB) copied, 0.0285657 s, 22.9 MB/s
Add data block at 640K...1280K
Seeking hole from 600000 ... 600000
Seeking data from 600000 ... 655360
Seeking hole from 1000000 ... 1310720
Seeking data from 1000000 ... 1000000
Seeking hole from 1200000 ... 1310720
Seeking data from 1200000 ... 1200000
Using offset > filesize ... lseek to 4000000 failed with 6
Using offset > filesize ... lseek to 4000000 failed with 6
Done
Component #1: 1M, 2 stripes 64K, component #2: EOF, 2 stripes 1M
1+0 records in
1+0 records out
262144 bytes (262 kB) copied, 0.0125084 s, 21.0 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0447311 s, 23.4 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0351291 s, 29.8 MB/s
Data at 256K...512K, 2M...3M and 4M...5M
Seeking hole from 1000 ... 1000
Seeking data from 1000 ... 262144
Seeking hole from 300000 ... 524288
Seeking data from 300000 ... 300000
Seeking hole from 1000000 ... 1000000
Seeking data from 1000000 ... 2097152
Seeking hole from 1500000 ... 1500000
Seeking data from 1500000 ... 2097152
Seeking hole from 3000000 ... 3145728
Seeking data from 3000000 ... 3000000
1+0 records in
1+0 records out
655360 bytes (655 kB) copied, 0.0223665 s, 29.3 MB/s
Add data block at 640K...1280K
Seeking hole from 600000 ... 600000
Seeking data from 600000 ... 655360
Seeking hole from 1000000 ... 1310720
Seeking data from 1000000 ... 1000000
Seeking hole from 1200000 ... 1310720
Seeking data from 1200000 ... 1200000
Using offset > filesize ... lseek to 4000000 failed with 6
Using offset > filesize ... lseek to 4000000 failed with 6
Done
Two stripes, stripe size 512K
1+0 records in
1+0 records out
262144 bytes (262 kB) copied, 0.012294 s, 21.3 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0378659 s, 27.7 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0403441 s, 26.0 MB/s
Data at 256K...512K, 2M...3M and 4M...5M
Seeking hole from 1000 ... 1000
Seeking data from 1000 ... 262144
Seeking hole from 300000 ... 524288
Seeking data from 300000 ... 300000
Seeking hole from 1000000 ... 1000000
Seeking data from 1000000 ... 2097152
Seeking hole from 1500000 ... 1500000
Seeking data from 1500000 ... 2097152
Seeking hole from 3000000 ... 3145728
Seeking data from 3000000 ... 3000000
1+0 records in
1+0 records out
655360 bytes (655 kB) copied, 0.0207779 s, 31.5 MB/s
Add data block at 640K...1280K
Seeking hole from 600000 ... 600000
Seeking data from 600000 ... 655360
Seeking hole from 1000000 ... 1310720
Seeking data from 1000000 ... 1000000
Seeking hole from 1200000 ... 1310720
Seeking data from 1200000 ... 1200000
Using offset > filesize ... lseek to 4000000 failed with 6
Using offset > filesize ... lseek to 4000000 failed with 6
Done
Mirrored file:
Component #1: 512K, stripe 64K, component #2: EOF, 2 stripes 512K
Plain 2 stripes 1M
1+0 records in
1+0 records out
262144 bytes (262 kB) copied, 0.0147892 s, 17.7 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0359858 s, 29.1 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0352367 s, 29.8 MB/s
Data at 256K...512K, 2M...3M and 4M...5M
Seeking hole from 1000 ... 1000
Seeking data from 1000 ... 262144
Seeking hole from 300000 ... 524288
Seeking data from 300000 ... 300000
Seeking hole from 1000000 ... 1000000
Seeking data from 1000000 ... 2097152
Seeking hole from 1500000 ... 1500000
Seeking data from 1500000 ... 2097152
Seeking hole from 3000000 ... 3145728
Seeking data from 3000000 ... 3000000
1+0 records in
1+0 records out
655360 bytes (655 kB) copied, 0.0241674 s, 27.1 MB/s
Add data block at 640K...1280K
Seeking hole from 600000 ... 600000
Seeking data from 600000 ... 655360
Seeking hole from 1000000 ... 1310720
Seeking data from 1000000 ... 1000000
Seeking hole from 1200000 ... 1310720
Seeking data from 1200000 ... 1200000
Using offset > filesize ... lseek to 4000000 failed with 6
Using offset > filesize ... lseek to 4000000 failed with 6
Done
PASS 430a (3s)
debug_raw_pointers=0
debug_raw_pointers=0
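The "Seeking hole/data" lines above come from the suite's own SEEK_HOLE/SEEK_DATA helper binary; plain shell cannot issue lseek directly, but the same sparse layout can be rebuilt and inspected with stock tools. A sketch under that assumption (the file name is illustrative; filefrag relies on FIEMAP, which the client must support):

    dd if=/dev/zero of=/mnt/lustre/f430 bs=1k count=256 seek=256 conv=notrunc  # data at 256K...512K
    dd if=/dev/zero of=/mnt/lustre/f430 bs=1M count=1 seek=2 conv=notrunc      # data at 2M...3M
    dd if=/dev/zero of=/mnt/lustre/f430 bs=1M count=1 seek=4 conv=notrunc      # data at 4M...5M
    filefrag -v /mnt/lustre/f430   # allocated extents should line up with the data ranges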
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 430b: lseek: SEEK_DATA/SEEK_HOLE special cases ========================================================== 00:08:59 (1713413339)
Seeking hole from 0 ... lseek to 0 failed with 6
Seeking data from 0 ... lseek to 0 failed with 6
Seeking hole from 1000000 ... 1000000
Seeking data from 1000000 ... lseek to 1000000 failed with 6
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0535276 s, 19.6 MB/s
Seeking hole from 1000000 ... 1048576
Seeking hole from 1048576 ... lseek to 1048576 failed with 6
Seeking hole from 1000000 ... 1048576
Seeking hole from 1048576 ... lseek to 1048576 failed with 6
1+0 records in
1+0 records out
1 byte (1 B) copied, 0.00254628 s, 0.4 kB/s
1+0 records in
1+0 records out
1 byte (1 B) copied, 0.00243197 s, 0.4 kB/s
PASS 430b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 430c: lseek: external tools check ========= 00:09:03 (1713413343)
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.00242386 s, 422 kB/s
cp 8.22 installed
cp test skipped due to 8.22 < 8.33
tar 1.26 installed
tar test skipped due to 1.26 < 1.29
PASS 430c (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 431: Restart transaction for IO =========== 00:09:07 (1713413347)
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00268521 s, 1.5 MB/s
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00272341 s, 1.5 MB/s
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00143293 s, 2.9 MB/s
fail_loc=0x251
PASS 431 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 432: mv dir from outside Lustre =========== 00:09:12 (1713413352)
On MGS 192.168.201.146, active = nodemap.active=1
waiting 10 secs for sync
On MGS 192.168.201.146, default.admin_nodemap = nodemap.default.admin_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.146, default.trusted_nodemap = nodemap.default.trusted_nodemap=1
waiting 10 secs for sync
On MGS 192.168.201.146, active = nodemap.active=0
waiting 10 secs for sync
PASS 432 (47s)
debug_raw_pointers=0
debug_raw_pointers=0
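Test 432's preamble is the nodemap dance visible above: activate the feature, grant the default nodemap admin and trusted rights, and deactivate it again at the end, waiting 10 s after each change for the servers to sync. Reduced to the underlying commands (a sketch, run via lctl on the MGS node; values as in this run):

    lctl nodemap_activate 1
    lctl nodemap_modify --name default --property admin --value 1
    lctl nodemap_modify --name default --property trusted --value 1
    sleep 10                    # let each change propagate to all servers
    lctl nodemap_activate 0     # restore the default (inactive)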
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 433: ldlm lock cancel releases dentries and inodes ========================================================== 00:10:01 (1713413401)
llite.lustre-ffff88012a451000.inode_cache=0
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d433.sanity
total: 256 create in 0.28 seconds: 909.52 ops/second
total: 256 mkdir in 0.41 seconds: 618.22 ops/second
lustre_inode_cache 535 objs before lock cancel, 20 after
llite.lustre-ffff88012a451000.inode_cache=1
PASS 433 (11s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 434: Client should not send RPCs for security.selinux with SElinux disabled ========================================================== 00:10:14 (1713413414)
striped dir -i0 -c1 -H all_char /mnt/lustre/d434.sanity/
llite.lustre-ffff88012a451000.xattr_cache=0
PASS 434 (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 440: bash completion for lfs, lctl ======== 00:10:20 (1713413420)
PASS 440 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 442: truncate vs read/write should not panic ========================================================== 00:10:25 (1713413425)
fail_loc=0x1430
PASS 442 (8s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 460d: Check encrypt pools output ========== 00:10:36 (1713413436)
physical_pages: 955079
pools:
PASS 460d (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 801a: write barrier user interfaces and stat machine ========================================================== 00:10:40 (1713413440)
debug=-1
debug_mb=150
debug=-1
debug_mb=150
Start barrier_freeze at: Thu Apr 18 00:10:42 EDT 2024
fail_val=5
fail_loc=0x2202
Got barrier status at: Thu Apr 18 00:10:45 EDT 2024
fail_val=0
fail_loc=0
sleep 20 seconds, then the barrier will be expired
Start barrier_thaw at: Thu Apr 18 00:11:07 EDT 2024
fail_val=5
fail_loc=0x2202
Got barrier status at: Thu Apr 18 00:11:09 EDT 2024
fail_val=0
fail_loc=0
fail_loc=0x2203
oleg146-server: Fail to freeze barrier for lustre: Object is remote
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 66
fail_loc=0
debug_mb=21
debug_mb=21
debug=trace inode super iotrace malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec lfsck hsm snapshot layout
debug=trace inode super iotrace malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec lfsck hsm snapshot layout
PASS 801a (35s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 801b: modification will be blocked by write barrier ========================================================== 00:11:17 (1713413477)
debug=-1
debug_mb=150
debug=-1
debug_mb=150
total: 6 mkdir in 0.03 seconds: 174.33 ops/second
  File: '/mnt/lustre/d801b.sanity/d5'
  Size: 4096 Blocks: 8 IO Block: 1048576 directory
Device: 2c54f966h/743766374d Inode: 162129771587706784 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2024-04-18 00:11:19.000000000 -0400
Modify: 2024-04-18 00:11:19.000000000 -0400
Change: 2024-04-18 00:11:19.000000000 -0400
 Birth: -
  PID TTY          TIME CMD
 7359 pts/0    00:00:00 mkdir
  PID TTY          TIME CMD
 7360 pts/0    00:00:00 touch
  PID TTY          TIME CMD
 7361 pts/0    00:00:00 ln
  PID TTY          TIME CMD
 7362 pts/0    00:00:00 mv
  PID TTY          TIME CMD
 7363 pts/0    00:00:00 rm
debug_mb=21
debug_mb=21
PASS 801b (12s)
debug_raw_pointers=0
debug_raw_pointers=0
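Tests 801a/801b drive the write-barrier state machine: a freeze stalls all metadata modifications filesystem-wide (the mkdir/touch/ln/mv/rm processes listed above sit blocked until thaw), and an un-thawed barrier simply expires after its timeout. The user interface, reduced to a sketch (run on the MGS node; "lustre" is this run's fsname, 30 is an illustrative timeout):

    lctl barrier_freeze lustre 30   # freeze; returns once all MDTs report frozen
    lctl barrier_stat lustre        # state: freezing_p1/p2, frozen, thawing, expired, ...
    lctl barrier_thaw lustre        # release the modifications blocked on the barrier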
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 801c: rescan barrier bitmap =============== 00:11:32 (1713413492)
debug=-1
debug_mb=150
debug=-1
debug_mb=150
Stopping /mnt/lustre-mds2 (opts:) on oleg146-server
oleg146-server: Fail to freeze barrier for lustre: Object is remote
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 66
1 of 2 MDT(s) in the filesystem lustre are inactive
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg146-server: oleg146-server.virtnet: executing set_default_debug all all
pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1
Started lustre-MDT0001
0 of 2 MDT(s) in the filesystem lustre are inactive
debug_mb=21
debug_mb=21
debug=trace inode super iotrace malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec lfsck hsm snapshot layout
debug=trace inode super iotrace malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec lfsck hsm snapshot layout
PASS 801c (13s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 802b: be able to set MDTs to readonly ===== 00:11:47 (1713413507)
mdt.lustre-MDT0000.readonly=0
mdt.lustre-MDT0001.readonly=0
mdt.lustre-MDT0000.readonly=1
mdt.lustre-MDT0001.readonly=1
Modify should be refused
touch: cannot touch '/mnt/lustre/d802b.sanity/guard': Read-only file system
Read should be allowed
mdt.lustre-MDT0000.readonly=0
mdt.lustre-MDT0001.readonly=0
mdt.lustre-MDT0000.readonly=0
mdt.lustre-MDT0001.readonly=0
PASS 802b (5s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 803a: verify agent object for remote object ========================================================== 00:11:54 (1713413514)
Waiting for MDT destroys to complete
before create:
UUID                  Inodes   IUsed    IFree IUse% Mounted on
lustre-MDT0000_UUID  1024000    9155  1014845    1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID  1024000    7559  1016441    1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID   262144   14779   247365    6% /mnt/lustre[OST:0]
lustre-OST0001_UUID   262144   14674   247470    6% /mnt/lustre[OST:1]
filesystem_summary:   511549   16714   494835    4% /mnt/lustre
after create:
UUID                  Inodes   IUsed    IFree IUse% Mounted on
lustre-MDT0000_UUID  1024000    9165  1014835    1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID  1024000    7569  1016431    1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID   262144   14779   247365    6% /mnt/lustre[OST:0]
lustre-OST0001_UUID   262144   14674   247470    6% /mnt/lustre[OST:1]
filesystem_summary:   511569   16734   494835    4% /mnt/lustre
Waiting for MDT destroys to complete
after unlink:
UUID                  Inodes   IUsed    IFree IUse% Mounted on
lustre-MDT0000_UUID  1024000    9155  1014845    1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID  1024000    7559  1016441    1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID   262144   14779   247365    6% /mnt/lustre[OST:0]
lustre-OST0001_UUID   262144   14674   247470    6% /mnt/lustre[OST:1]
filesystem_summary:   511549   16714   494835    4% /mnt/lustre
PASS 803a (14s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 803b: remote object can getattr from cache ========================================================== 00:12:10 (1713413530)
PASS 803b (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 804: verify agent entry for remote entry == 00:12:17 (1713413537)
oleg146-server: debugfs 1.46.2.wc5 (26-Mar-2022)
oleg146-server: /dev/mapper/mds2_flakey: catastrophic mode - not reading inode or group bitmaps
512004 (12) . 2 (4084) ..
512145 (32) 0x240000404:0x7e4:0x0 512206 (32) 0x240000406:0x1ccc:0x0 64010 (32) 0x240000401:0x16a:0x0 96005 (32) 0x240000401:0x170:0x0 128003 (32) 0x240000401:0x178:0x0 128010 (32) 0x240000401:0x17b:0x0 128014 (32) 0x240000401:0x17d:0x0 160001 (32) 0x240000401:0x183:0x0 192017 (28) 0x2400013a0:0xd:0x0 256004 (28) 0x2400013a0:0x28:0x0 128021 (28) 0x2400013a0:0x2f:0x0 192021 (28) 0x2400013a2:0x14:0x0 256011 (28) 0x2400013a2:0x1a:0x0 256014 (28) 0x2400013a2:0x1d:0x0 256016 (28) 0x2400013a4:0x5:0x0 288007 (28) 0x240001b73:0x26:0x0 128027 (28) 0x240001b73:0x34:0x0 192028 (28) 0x240001b73:0x4b:0x0 32029 (28) 0x240001b73:0x5b:0x0 160029 (28) 0x240001b73:0x5f:0x0 512447 (28) 0x240001b73:0x66:0x0 96030 (48) 0x240001b72:0x1e:0x0 514519 (48) 0x240001b78:0x5a87:0x0 514530 (32) 0x240001b78:0x5a91:0x0 514531 (32) 0x240001b78:0x5a92:0x0 514545 (32) 0x240001b78:0x5aa0:0x0 514561 (32) 0x240001b78:0x5ab0:0x0 514731 (32) 0x240001b78:0x5b0d:0x0 514575 (28) 0x240001b79:0x7:0x0 514671 (32) 0x240001b78:0x5ad1:0x0 514714 (32) 0x240001b78:0x5afc:0x0 514732 (32) 0x240001b78:0x5b0e:0x0 514726 (32) 0x240001b78:0x5b08:0x0 514665 (32) 0x240001b78:0x5acb:0x0 514973 (28) 0x240001b79:0x9b:0x0 515000 (28) 0x240001b79:0xa4:0x0 515182 (28) 0x240001b79:0xcd:0x0 515202 (28) 0x240001b7b:0x7:0x0 515224 (28) 0x240001b7b:0x15:0x0 515262 (28) 0x240002b10:0x21:0x0 515422 (28) 0x240002b13:0x79:0x0 515331 (28) 0x240002b13:0x1e:0x0 515343 (28) 0x240002b13:0x2a:0x0 515489 (28) 0x240002b13:0x9d:0x0 515354 (28) 0x240002b13:0x35:0x0 515335 (28) 0x240002b13:0x22:0x0 515441 (28) 0x240002b13:0x85:0x0 515384 (28) 0x240002b13:0x53:0x0 515471 (28) 0x240002b13:0x94:0x0 515371 (28) 0x240002b13:0x46:0x0 515345 (28) 0x240002b13:0x2c:0x0 515410 (28) 0x240002b13:0x6d:0x0 515385 (28) 0x240002b13:0x54:0x0 515472 (48) 0x240002b13:0xfc:0x0 3253 (48) 0x240002b13:0xea:0x0 515468 (48) 0x240002b13:0xfa:0x0 3208 (48) 0x240002b13:0xbd:0x0 515539 (48) 0x240002b13:0x11d:0x0 3211 (48) 0x240002b13:0xc0:0x0 32046 (48) 0x240002b13:0x714:0x0 32048 (32) 0x240002b13:0x715:0x0 32093 (28) 0x240002b15:0x26:0x0 32102 (28) 0x240002b15:0x2f:0x0 32117 (28) 0x240002b15:0x3e:0x0 32138 (28) 0x240002b15:0x53:0x0 32140 (28) 0x240002b15:0x55:0x0 32146 (28) 0x240002b15:0x5b:0x0 32153 (28) 0x240002b15:0x62:0x0 32157 (28) 0x240002b15:0x66:0x0 32202 (32) 0x240002b13:0x13cf:0x0 32215 (32) 0x240002b13:0x1e80:0x0 32218 (32) 0x240002b13:0x1e95:0x0 32226 (1812) 0x240002b13:0x1fb2:0x0 64012 (32) 0x240000401:0x16b:0x0 192010 (28) 0x2400013a0:0x6:0x0 192001 (32) 0x240000401:0x18f:0x0 512108 (48) 0x240000404:0x2:0x0 32011 (48) 0x240000406:0x5a:0x0 512213 (48) 0x240000407:0x1:0x0 32013 (32) 0x240000401:0x15e:0x0 192006 (32) 0x240000401:0x192:0x0 512148 (32) 0x240000401:0x14c:0x0 224018 (28) 0x2400013a0:0x22:0x0 224020 (28) 0x2400013a0:0x24:0x0 256001 (28) 0x2400013a0:0x25:0x0 256003 (28) 0x2400013a0:0x27:0x0 64023 (28) 0x2400013a2:0x28:0x0 512369 (28) 0x240001b72:0xf:0x0 160023 (28) 0x240001b73:0x5:0x0 64026 (28) 0x240001b73:0x1d:0x0 160026 (28) 0x240001b73:0x20:0x0 224026 (28) 0x240001b73:0x22:0x0 288013 (28) 0x240001b73:0x2c:0x0 320001 (28) 0x240001b73:0x51:0x0 512432 (28) 0x240001b73:0x57:0x0 512440 (28) 0x240001b73:0x61:0x0 64030 (28) 0x240001b73:0x6c:0x0 224029 (28) 0x240001b72:0x62:0x0 512458 (32) 0x240001b74:0x1b5f:0x0 512483 (28) 0x240001b73:0x6d:0x0 32036 (48) 0x240001b78:0x569c:0x0 514748 (32) 0x240001b78:0x5b1e:0x0 514672 (32) 0x240001b78:0x5ad2:0x0 514573 (32) 0x240001b78:0x5abc:0x0 514715 (32) 0x240001b78:0x5afd:0x0 514749 (32) 0x240001b78:0x5b1f:0x0 514673 (32) 
0x240001b78:0x5ad3:0x0 514783 (28) 0x240001b79:0x92:0x0 514905 (28) 0x240001b79:0x8b:0x0 514999 (28) 0x240001b79:0xa3:0x0 515138 (28) 0x240001b79:0xc2:0x0 515163 (28) 0x240001b79:0xc9:0x0 515243 (32) 0x240001b7b:0x7fd:0x0 515250 (28) 0x240002b10:0x10:0x0 515252 (32) 0x240002b11:0x9aa:0x0 515355 (28) 0x240002b13:0x36:0x0 515392 (28) 0x240002b13:0x5b:0x0 515386 (28) 0x240002b13:0x55:0x0 515390 (28) 0x240002b13:0x59:0x0 515398 (28) 0x240002b13:0x61:0x0 515381 (28) 0x240002b13:0x50:0x0 515336 (28) 0x240002b13:0x23:0x0 515460 (48) 0x240002b13:0xf7:0x0 3252 (48) 0x240002b13:0xe9:0x0 515476 (48) 0x240002b13:0xfe:0x0 515482 (48) 0x240002b13:0x100:0x0 3249 (48) 0x240002b13:0xe6:0x0 515537 (48) 0x240002b13:0x11b:0x0 515450 (48) 0x240002b13:0xf3:0x0 32070 (28) 0x240002b15:0xf:0x0 32071 (28) 0x240002b15:0x10:0x0 32084 (28) 0x240002b15:0x1d:0x0 32085 (28) 0x240002b15:0x1e:0x0 32103 (28) 0x240002b15:0x30:0x0 32107 (28) 0x240002b15:0x34:0x0 32123 (28) 0x240002b15:0x44:0x0 32125 (28) 0x240002b15:0x46:0x0 32137 (28) 0x240002b15:0x52:0x0 32160 (28) 0x240002b15:0x69:0x0 32189 (1972) 0x240002b15:0x86:0x0 96026 (28) 0x240001b73:0x1e:0x0 224006 (28) 0x2400013a0:0x16:0x0 224013 (28) 0x2400013a0:0x1d:0x0 128008 (32) 0x240000401:0x17a:0x0 512451 (28) 0x240001b73:0x69:0x0 192020 (28) 0x2400013a0:0x10:0x0 512420 (28) 0x240001b73:0x45:0x0 96007 (32) 0x240000401:0x171:0x0 192008 (28) 0x2400013a0:0x4:0x0 512434 (28) 0x240001b73:0x59:0x0 160028 (28) 0x240001b73:0x4a:0x0 288021 (28) 0x240001b73:0x3e:0x0 224015 (28) 0x2400013a0:0x1f:0x0 256008 (28) 0x2400013a0:0x2c:0x0 512159 (32) 0x240000401:0x14e:0x0 512468 (32) 0x240001b74:0x1b6f:0x0 512471 (32) 0x240001b74:0x1b76:0x0 512472 (32) 0x240001b74:0x1b78:0x0 514532 (32) 0x240001b78:0x5a93:0x0 514656 (32) 0x240001b78:0x5ac2:0x0 514655 (32) 0x240001b78:0x5ac1:0x0 514693 (32) 0x240001b78:0x5ae7:0x0 514694 (32) 0x240001b78:0x5ae8:0x0 514721 (32) 0x240001b78:0x5b03:0x0 514683 (32) 0x240001b78:0x5add:0x0 514844 (28) 0x240001b79:0x76:0x0 514874 (28) 0x240001b79:0x7f:0x0 514953 (28) 0x240001b79:0x94:0x0 515021 (28) 0x240001b79:0xa7:0x0 514861 (28) 0x240001b79:0x7a:0x0 514811 (28) 0x240001b79:0x82:0x0 514879 (28) 0x240001b79:0x84:0x0 514880 (28) 0x240001b79:0x85:0x0 514914 (28) 0x240001b79:0x8c:0x0 514934 (28) 0x240001b79:0x91:0x0 515028 (28) 0x240001b79:0xa8:0x0 515036 (28) 0x240001b79:0xad:0x0 515051 (28) 0x240001b79:0xb3:0x0 515083 (28) 0x240001b79:0xb8:0x0 515206 (28) 0x240001b7b:0x9:0x0 515239 (28) 0x240001b7b:0x23:0x0 515272 (28) 0x240002b10:0x1d:0x0 515286 (28) 0x240002b13:0xe:0x0 515290 (28) 0x240002b13:0x10:0x0 515328 (28) 0x240002b13:0x1b:0x0 515364 (28) 0x240002b13:0x3f:0x0 515515 (28) 0x240002b13:0xaa:0x0 515499 (28) 0x240002b13:0xa2:0x0 515382 (28) 0x240002b13:0x51:0x0 515366 (28) 0x240002b13:0x41:0x0 515421 (28) 0x240002b13:0x78:0x0 515341 (28) 0x240002b13:0x28:0x0 515479 (28) 0x240002b13:0x98:0x0 515407 (28) 0x240002b13:0x6a:0x0 515502 (48) 0x240002b13:0x108:0x0 3257 (48) 0x240002b13:0xee:0x0 515538 (48) 0x240002b13:0x11c:0x0 3238 (48) 0x240002b13:0xdb:0x0 3234 (48) 0x240002b13:0xd7:0x0 32056 (28) 0x240002b15:0x1:0x0 32104 (28) 0x240002b15:0x31:0x0 32105 (28) 0x240002b15:0x32:0x0 32112 (28) 0x240002b15:0x39:0x0 32130 (28) 0x240002b15:0x4b:0x0 32162 (28) 0x240002b15:0x6b:0x0 32183 (28) 0x240002b15:0x80:0x0 32191 (28) 0x240002b15:0x88:0x0 32203 (2068) 0x240002b13:0x13d2:0x0 512419 (28) 0x240001b73:0x44:0x0 192013 (28) 0x2400013a0:0x9:0x0 224027 (28) 0x240001b73:0x37:0x0 96023 (28) 0x240001b73:0x1:0x0 32009 (32) 0x240000401:0x153:0x0 32023 (28) 0x240001b70:0x1:0x0 512150 
(48) 0x240000404:0x7e7:0x0 288028 (28) 0x240001b73:0x50:0x0 288001 (28) 0x240001b73:0x18:0x0 512114 (48) 0x240000404:0x4:0x0 192019 (28) 0x2400013a0:0xf:0x0 224007 (28) 0x2400013a0:0x17:0x0 160030 (48) 0x240001b72:0x5f:0x0 512490 (28) 0x240001b73:0x74:0x0 512496 (32) 0x240001b74:0x1b88:0x0 512517 (32) 0x240001b77:0x2ee2:0x0 514557 (32) 0x240001b78:0x5aac:0x0 514733 (32) 0x240001b78:0x5b0f:0x0 514742 (32) 0x240001b78:0x5b18:0x0 514744 (32) 0x240001b78:0x5b1a:0x0 514730 (32) 0x240001b78:0x5b0c:0x0 514777 (48) 0x240001b78:0x5bd2:0x0 514791 (28) 0x240001b79:0x9a:0x0 515093 (28) 0x240001b79:0xbc:0x0 515105 (28) 0x240001b79:0xbf:0x0 515258 (32) 0x240002b11:0x9d4:0x0 515266 (28) 0x240002b10:0x19:0x0 515291 (28) 0x240002b13:0x15:0x0 515306 (28) 0x240002b10:0x32:0x0 515487 (28) 0x240002b13:0x9c:0x0 515395 (28) 0x240002b13:0x5e:0x0 515342 (28) 0x240002b13:0x29:0x0 515495 (28) 0x240002b13:0xa0:0x0 515362 (28) 0x240002b13:0x3d:0x0 515477 (28) 0x240002b13:0x97:0x0 515363 (28) 0x240002b13:0x3e:0x0 515509 (28) 0x240002b13:0xa7:0x0 515534 (48) 0x240002b13:0x118:0x0 3251 (48) 0x240002b13:0xe8:0x0 3213 (48) 0x240002b13:0xc2:0x0 515500 (48) 0x240002b13:0x107:0x0 32050 (32) 0x240002b13:0x729:0x0 32072 (28) 0x240002b15:0x11:0x0 32073 (28) 0x240002b15:0x12:0x0 32074 (28) 0x240002b15:0x13:0x0 32078 (28) 0x240002b15:0x17:0x0 32082 (28) 0x240002b15:0x1b:0x0 32113 (28) 0x240002b15:0x3a:0x0 32133 (28) 0x240002b15:0x4e:0x0 32148 (28) 0x240002b15:0x5d:0x0 32156 (76) 0x240002b15:0x65:0x0 32221 (2420) 0x240002b13:0xb0f:0x0 515048 (28) 0x240001b79:0xb0:0x0 514814 (28) 0x240001b79:0x86:0x0 96003 (32) 0x240000401:0x16f:0x0 514664 (32) 0x240001b78:0x5aca:0x0 224016 (28) 0x2400013a0:0x20:0x0 514663 (32) 0x240001b78:0x5ac9:0x0 515050 (28) 0x240001b79:0xb2:0x0 514735 (32) 0x240001b78:0x5b11:0x0 514679 (32) 0x240001b78:0x5ad9:0x0 160005 (32) 0x240000401:0x185:0x0 64015 (32) 0x240000401:0x16d:0x0 32030 (28) 0x240001b73:0x6b:0x0 192007 (28) 0x2400013a0:0x1:0x0 64007 (32) 0x240000401:0x168:0x0 514653 (32) 0x240001b78:0x5abf:0x0 64022 (28) 0x240001b70:0x4:0x0 512149 (32) 0x240000401:0x14d:0x0 64017 (32) 0x240000401:0x16e:0x0 514660 (32) 0x240001b78:0x5ac6:0x0 512487 (28) 0x240001b73:0x71:0x0 64014 (32) 0x240000401:0x16c:0x0 128024 (28) 0x240001b72:0xb:0x0 512488 (28) 0x240001b73:0x72:0x0 256017 (28) 0x2400013a2:0x20:0x0 514678 (32) 0x240001b78:0x5ad8:0x0 512491 (32) 0x240001b74:0x1b83:0x0 512224 (32) 0x240000401:0x162:0x0 514722 (32) 0x240001b78:0x5b04:0x0 514990 (32) 0x240001b78:0x5cba:0x0 128006 (32) 0x240000401:0x179:0x0 515285 (28) 0x240002b10:0x24:0x0 515292 (28) 0x240002b10:0x31:0x0 515447 (28) 0x240002b13:0x88:0x0 515388 (28) 0x240002b13:0x57:0x0 515387 (28) 0x240002b13:0x56:0x0 515469 (28) 0x240002b13:0x93:0x0 515433 (28) 0x240002b13:0x81:0x0 515401 (28) 0x240002b13:0x64:0x0 515455 (28) 0x240002b13:0x8c:0x0 515457 (28) 0x240002b13:0x8d:0x0 515406 (28) 0x240002b13:0x69:0x0 3239 (48) 0x240002b13:0xdc:0x0 3230 (48) 0x240002b13:0xd3:0x0 3218 (48) 0x240002b13:0xc7:0x0 515492 (48) 0x240002b13:0x104:0x0 3212 (48) 0x240002b13:0xc1:0x0 3228 (48) 0x240002b13:0xd1:0x0 515494 (48) 0x240002b13:0x105:0x0 515540 (48) 0x240002b13:0x11e:0x0 515524 (48) 0x240002b13:0x112:0x0 515486 (48) 0x240002b13:0x101:0x0 515541 (48) 0x240002b13:0x11f:0x0 3235 (48) 0x240002b13:0xd8:0x0 32042 (32) 0x240002b13:0x12d:0x0 32087 (28) 0x240002b15:0x20:0x0 32116 (28) 0x240002b15:0x3d:0x0 32136 (28) 0x240002b15:0x51:0x0 32139 (28) 0x240002b15:0x54:0x0 32142 (28) 0x240002b15:0x57:0x0 32144 (28) 0x240002b15:0x59:0x0 32167 (28) 0x240002b15:0x70:0x0 32178 (28) 
0x240002b15:0x7b:0x0 32188 (28) 0x240002b15:0x85:0x0 32192 (28) 0x240002b15:0x89:0x0 32222 (1984) 0x240002b13:0x1fa6:0x0 514702 (32) 0x240001b78:0x5af0:0x0 514773 (48) 0x240001b78:0x5bd0:0x0 514729 (32) 0x240001b78:0x5b0b:0x0 32028 (28) 0x240001b73:0x46:0x0 514691 (32) 0x240001b78:0x5ae5:0x0 512442 (28) 0x240001b73:0x62:0x0 64031 (32) 0x240001b74:0x1baf:0x0 192023 (28) 0x240001b73:0x9:0x0 514790 (28) 0x240001b79:0x87:0x0 288003 (28) 0x240001b73:0x1a:0x0 514699 (32) 0x240001b78:0x5aed:0x0 64021 (28) 0x2400013a2:0x4:0x0 514549 (32) 0x240001b78:0x5aa4:0x0 288026 (28) 0x240001b73:0x4e:0x0 128020 (32) 0x240000401:0x18b:0x0 512429 (28) 0x240001b73:0x55:0x0 514690 (32) 0x240001b78:0x5ae4:0x0 514666 (32) 0x240001b78:0x5acc:0x0 160019 (32) 0x240000401:0x18e:0x0 224008 (28) 0x2400013a0:0x18:0x0 288025 (28) 0x240001b73:0x42:0x0 514540 (32) 0x240001b78:0x5a9b:0x0 224001 (28) 0x2400013a0:0x11:0x0 32005 (28) 0x240000401:0x2:0x0 32027 (28) 0x240001b73:0x31:0x0 514669 (32) 0x240001b78:0x5acf:0x0 192026 (28) 0x240001b73:0x21:0x0 512129 (48) 0x240000404:0x9:0x0 160017 (32) 0x240000401:0x18d:0x0 128015 (32) 0x240000401:0x17e:0x0 320003 (28) 0x240001b73:0x53:0x0 514707 (32) 0x240001b78:0x5af5:0x0 515053 (28) 0x240001b79:0xb5:0x0 515104 (28) 0x240001b79:0xbe:0x0 515147 (28) 0x240001b79:0xc5:0x0 515159 (28) 0x240001b79:0xc8:0x0 515194 (32) 0x240001b7a:0x138a:0x0 515209 (28) 0x240001b79:0xd5:0x0 515237 (28) 0x240001b7b:0x21:0x0 515271 (28) 0x240002b10:0x1c:0x0 515287 (28) 0x240002b13:0xf:0x0 515289 (28) 0x240002b10:0x27:0x0 515307 (28) 0x240002b10:0x33:0x0 32040 (48) 0x240002b13:0xb1:0x0 515397 (28) 0x240002b13:0x60:0x0 515412 (28) 0x240002b13:0x6f:0x0 515359 (28) 0x240002b13:0x3a:0x0 515485 (28) 0x240002b13:0x9b:0x0 515370 (28) 0x240002b13:0x45:0x0 515334 (28) 0x240002b13:0x21:0x0 515419 (28) 0x240002b13:0x76:0x0 515507 (28) 0x240002b13:0xa6:0x0 515333 (28) 0x240002b13:0x20:0x0 515529 (48) 0x240002b13:0x113:0x0 3236 (48) 0x240002b13:0xd9:0x0 3233 (48) 0x240002b13:0xd6:0x0 3250 (48) 0x240002b13:0xe7:0x0 3217 (48) 0x240002b13:0xc6:0x0 515474 (48) 0x240002b13:0xfd:0x0 515547 (48) 0x240002b13:0x51e:0x0 515550 (48) 0x240002b13:0x520:0x0 32055 (32) 0x240002b13:0xb7a:0x0 32066 (28) 0x240002b15:0xb:0x0 32068 (28) 0x240002b15:0xd:0x0 32076 (28) 0x240002b15:0x15:0x0 32081 (28) 0x240002b15:0x1a:0x0 32152 (28) 0x240002b15:0x61:0x0 32173 (28) 0x240002b15:0x76:0x0 32182 (1904) 0x240002b15:0x7f:0x0 160027 (28) 0x240001b73:0x35:0x0 512144 (32) 0x240000404:0x7e3:0x0 128025 (28) 0x240001b72:0x18:0x0 514877 (28) 0x240001b79:0x81:0x0 514667 (32) 0x240001b78:0x5acd:0x0 256012 (28) 0x2400013a2:0x1c:0x0 514674 (32) 0x240001b78:0x5ad4:0x0 514900 (28) 0x240001b79:0x88:0x0 514717 (32) 0x240001b78:0x5aff:0x0 515151 (28) 0x240001b79:0xc6:0x0 514521 (48) 0x240001b78:0x5a8a:0x0 514772 (28) 0x240001b79:0x7e:0x0 514560 (32) 0x240001b78:0x5aaf:0x0 515143 (28) 0x240001b79:0xc4:0x0 514711 (32) 0x240001b78:0x5af9:0x0 512433 (28) 0x240001b73:0x58:0x0 515031 (28) 0x240001b79:0xab:0x0 224014 (28) 0x2400013a0:0x1e:0x0 514747 (32) 0x240001b78:0x5b1d:0x0 32008 (32) 0x240000401:0x152:0x0 512404 (28) 0x240001b73:0x30:0x0 514654 (32) 0x240001b78:0x5ac0:0x0 224024 (28) 0x240001b73:0xe:0x0 512228 (32) 0x240000401:0x163:0x0 514901 (28) 0x240001b79:0x89:0x0 514736 (32) 0x240001b78:0x5b12:0x0 192012 (28) 0x2400013a0:0x8:0x0 514687 (32) 0x240001b78:0x5ae1:0x0 32031 (32) 0x240001b74:0x1ba4:0x0 256013 (28) 0x2400013a4:0x4:0x0 512444 (28) 0x240001b73:0x64:0x0 514709 (32) 0x240001b78:0x5af7:0x0 512498 (32) 0x240001b74:0x1b9c:0x0 224010 (28) 0x2400013a0:0x1a:0x0 
512111 (48) 0x240000404:0x3:0x0 514708 (32) 0x240001b78:0x5af6:0x0 288017 (28) 0x240001b73:0x3a:0x0 515188 (28) 0x240001b79:0xd0:0x0 515192 (32) 0x240001b78:0x5cbc:0x0 515249 (28) 0x240002b10:0x5:0x0 515274 (28) 0x240002b10:0x1e:0x0 515265 (28) 0x240002b10:0x22:0x0 515332 (28) 0x240002b13:0x1f:0x0 515415 (28) 0x240002b13:0x72:0x0 515349 (28) 0x240002b13:0x30:0x0 515427 (28) 0x240002b13:0x7e:0x0 515394 (28) 0x240002b13:0x5d:0x0 515346 (28) 0x240002b13:0x2d:0x0 515396 (28) 0x240002b13:0x5f:0x0 515542 (48) 0x240002b13:0x120:0x0 3241 (48) 0x240002b13:0xde:0x0 3240 (48) 0x240002b13:0xdd:0x0 515520 (48) 0x240002b13:0x110:0x0 515448 (48) 0x240002b13:0xf2:0x0 515498 (48) 0x240002b13:0x106:0x0 32101 (28) 0x240002b15:0x2e:0x0 32114 (28) 0x240002b15:0x3b:0x0 32149 (28) 0x240002b15:0x5e:0x0 32164 (28) 0x240002b15:0x6d:0x0 32169 (28) 0x240002b15:0x72:0x0 32219 (2188) 0x240002b13:0x1e97:0x0 514692 (32) 0x240001b78:0x5ae6:0x0 64003 (32) 0x240000401:0x166:0x0 514754 (48) 0x240001b78:0x5bc2:0x0 512162 (32) 0x240000401:0x151:0x0 514552 (32) 0x240001b78:0x5aa7:0x0 192027 (28) 0x240001b73:0x36:0x0 514758 (48) 0x240001b78:0x5bc3:0x0 515167 (28) 0x240001b79:0xca:0x0 514831 (28) 0x240001b79:0x71:0x0 514794 (28) 0x240001b79:0x74:0x0 64024 (28) 0x240001b72:0x1:0x0 514741 (32) 0x240001b78:0x5b17:0x0 514657 (32) 0x240001b78:0x5ac3:0x0 514740 (32) 0x240001b78:0x5b16:0x0 256023 (28) 0x240001b73:0x15:0x0 192024 (28) 0x240001b73:0xa:0x0 514972 (28) 0x240001b79:0x98:0x0 192014 (28) 0x2400013a0:0xa:0x0 96011 (32) 0x240000401:0x173:0x0 256002 (28) 0x2400013a0:0x26:0x0 514670 (32) 0x240001b78:0x5ad0:0x0 515044 (28) 0x240001b79:0xaf:0x0 224005 (28) 0x2400013a0:0x15:0x0 32038 (48) 0x240001b78:0x569e:0x0 514563 (32) 0x240001b78:0x5ab2:0x0 288005 (28) 0x240001b73:0x24:0x0 256024 (28) 0x240001b73:0x16:0x0 160021 (28) 0x2400013a0:0x30:0x0 128001 (32) 0x240000401:0x177:0x0 32010 (32) 0x240000401:0x154:0x0 256021 (28) 0x240001b73:0x13:0x0 514724 (32) 0x240001b78:0x5b06:0x0 514682 (32) 0x240001b78:0x5adc:0x0 512470 (32) 0x240001b74:0x1b74:0x0 515435 (28) 0x240002b13:0x82:0x0 515513 (28) 0x240002b13:0xa9:0x0 515418 (28) 0x240002b13:0x75:0x0 3220 (48) 0x240002b13:0xc9:0x0 515488 (48) 0x240002b13:0x102:0x0 3216 (48) 0x240002b13:0xc5:0x0 32044 (32) 0x240002b13:0x131:0x0 32054 (32) 0x240002b13:0xb77:0x0 32060 (28) 0x240002b15:0x5:0x0 32064 (28) 0x240002b15:0x9:0x0 32075 (28) 0x240002b15:0x14:0x0 32079 (28) 0x240002b15:0x18:0x0 32090 (28) 0x240002b15:0x23:0x0 32118 (28) 0x240002b15:0x3f:0x0 32132 (28) 0x240002b15:0x4d:0x0 32145 (28) 0x240002b15:0x5a:0x0 32150 (28) 0x240002b15:0x5f:0x0 32170 (28) 0x240002b15:0x73:0x0 32179 (28) 0x240002b15:0x7c:0x0 32193 (28) 0x240002b15:0x8a:0x0 32205 (48) 0x240002b13:0x13d6:0x0 32213 (48) 0x240002b13:0x1df7:0x0 32224 (2300) 0x240002b10:0x96:0x0 3207 (48) 0x240002b13:0xbc:0x0 514931 (28) 0x240001b79:0x8f:0x0 514652 (28) 0x240001b7b:0x12:0x0 515530 (48) 0x240002b13:0x114:0x0 515365 (28) 0x240002b13:0x40:0x0 515193 (28) 0x240001b7a:0x1:0x0 515405 (28) 0x240002b13:0x68:0x0 3226 (48) 0x240002b13:0xcf:0x0 515310 (28) 0x240002b10:0x36:0x0 64032 (32) 0x240001b74:0x1bb1:0x0 515312 (28) 0x240002b10:0x37:0x0 514866 (28) 0x240001b79:0x7c:0x0 160012 (32) 0x240000401:0x189:0x0 256028 (28) 0x240001b73:0x4d:0x0 514751 (32) 0x240001b78:0x5b21:0x0 64028 (28) 0x240001b73:0x47:0x0 32067 (32) 0x240002b15:0xc:0x0 515269 (28) 0x240002b10:0x1b:0x0 514815 (28) 0x240001b79:0x8e:0x0 515142 (28) 0x240001b79:0xc3:0x0 515187 (28) 0x240001b79:0xcf:0x0 96016 (32) 0x240000401:0x176:0x0 515508 (48) 0x240002b13:0x10a:0x0 514684 (32) 
0x240001b78:0x5ade:0x0 3258 (28) 0x240002b13:0xef:0x0 512160 (32) 0x240000401:0x14f:0x0 515049 (28) 0x240001b79:0xb1:0x0 514701 (32) 0x240001b78:0x5aef:0x0 515010 (28) 0x240001b79:0xa5:0x0 515253 (32) 0x240002b11:0x9b3:0x0 514551 (32) 0x240001b78:0x5aa6:0x0 515510 (48) 0x240002b13:0x10b:0x0 224012 (28) 0x2400013a0:0x1c:0x0 515338 (28) 0x240002b13:0x25:0x0 64027 (28) 0x240001b73:0x32:0x0 96025 (28) 0x240001b72:0x16:0x0 224009 (28) 0x2400013a0:0x19:0x0 64029 (28) 0x240001b73:0x5c:0x0 128023 (28) 0x240001b72:0xa:0x0 515380 (28) 0x240002b13:0x4f:0x0 256015 (28) 0x2400013a2:0x1f:0x0 3242 (48) 0x240002b13:0xdf:0x0 514695 (32) 0x240001b78:0x5ae9:0x0 512328 (32) 0x240000401:0x196:0x0 128026 (28) 0x240001b73:0x1f:0x0 512156 (48) 0x240000404:0x7e9:0x0 515467 (28) 0x240002b13:0x92:0x0 515481 (28) 0x240002b13:0x99:0x0 515446 (48) 0x240002b13:0xf1:0x0 514819 (28) 0x240001b79:0x99:0x0 514734 (32) 0x240001b78:0x5b10:0x0 515459 (28) 0x240002b13:0x8e:0x0 515029 (28) 0x240001b79:0xa9:0x0 514659 (32) 0x240001b78:0x5ac5:0x0 512327 (32) 0x240000401:0x195:0x0 515379 (28) 0x240002b13:0x4e:0x0 512492 (48) 0x240001b74:0x1b86:0x0 128022 (28) 0x240001b72:0x8:0x0 512436 (28) 0x240001b73:0x5a:0x0 515222 (28) 0x240001b7b:0x14:0x0 515466 (48) 0x240002b13:0xf9:0x0 515091 (28) 0x240001b79:0xba:0x0 515245 (32) 0x240001b7b:0x801:0x0 32088 (28) 0x240002b15:0x21:0x0 32097 (28) 0x240002b15:0x2a:0x0 32161 (28) 0x240002b15:0x6a:0x0 32165 (28) 0x240002b15:0x6e:0x0 32168 (28) 0x240002b15:0x71:0x0 32196 (32) 0x240002b13:0xc67:0x0 32209 (1896) 0x240002b13:0x1ce9:0x0 3214 (48) 0x240002b13:0xc3:0x0 514547 (32) 0x240001b78:0x5aa2:0x0 514713 (32) 0x240001b78:0x5afb:0x0 256010 (28) 0x2400013a4:0x1:0x0 160014 (32) 0x240000401:0x18c:0x0 288022 (28) 0x240001b73:0x3f:0x0 256007 (28) 0x2400013a0:0x2b:0x0 515361 (28) 0x240002b13:0x3c:0x0 514698 (32) 0x240001b78:0x5aec:0x0 514697 (32) 0x240001b78:0x5aeb:0x0 2203 (48) 0x240002b13:0xbb:0x0 515190 (28) 0x240001b79:0xd2:0x0 515579 (32) 0x240002b13:0x84c:0x0 512147 (32) 0x240000404:0x7e6:0x0 512460 (32) 0x240001b74:0x1b60:0x0 192016 (28) 0x2400013a0:0xc:0x0 288012 (28) 0x240001b73:0x2b:0x0 514725 (32) 0x240001b78:0x5b07:0x0 514770 (48) 0x240001b78:0x5c7f:0x0 515456 (48) 0x240002b13:0xf6:0x0 515347 (28) 0x240002b13:0x2e:0x0 515330 (28) 0x240002b13:0x1d:0x0 515308 (28) 0x240002b10:0x34:0x0 515445 (28) 0x240002b13:0x87:0x0 32061 (32) 0x240002b15:0x6:0x0 514789 (28) 0x240001b79:0x90:0x0 515351 (28) 0x240002b13:0x32:0x0 32077 (28) 0x240002b15:0x16:0x0 32091 (36) 0x240002b15:0x24:0x0 514700 (32) 0x240001b78:0x5aee:0x0 515329 (28) 0x240002b13:0x1c:0x0 256006 (28) 0x2400013a0:0x2a:0x0 192009 (28) 0x2400013a0:0x5:0x0 515416 (28) 0x240002b13:0x73:0x0 515367 (28) 0x240002b13:0x42:0x0 224028 (28) 0x240001b73:0x4c:0x0 512325 (32) 0x240000401:0x193:0x0 515350 (28) 0x240002b13:0x31:0x0 514562 (32) 0x240001b78:0x5ab1:0x0 514716 (32) 0x240001b78:0x5afe:0x0 32032 (48) 0x240001b74:0x1bae:0x0 514688 (32) 0x240001b78:0x5ae2:0x0 256020 (28) 0x240001b73:0x12:0x0 515554 (32) 0x240002b13:0x711:0x0 515054 (28) 0x240001b79:0xb6:0x0 288024 (28) 0x240001b73:0x41:0x0 160007 (32) 0x240000401:0x186:0x0 514559 (32) 0x240001b78:0x5aae:0x0 515518 (48) 0x240002b13:0x10f:0x0 96027 (28) 0x240001b73:0x33:0x0 515483 (28) 0x240002b13:0x9a:0x0 515552 (32) 0x240002b13:0x69c:0x0 3222 (48) 0x240002b13:0xcb:0x0 515348 (28) 0x240002b13:0x2f:0x0 514839 (28) 0x240001b79:0x73:0x0 515174 (28) 0x240001b79:0xcb:0x0 515357 (28) 0x240002b13:0x38:0x0 515403 (28) 0x240002b13:0x66:0x0 515461 (28) 0x240002b13:0x8f:0x0 515478 (48) 0x240002b13:0xff:0x0 515426 
(28) 0x240002b13:0x7d:0x0 3247 (48) 0x240002b13:0xe4:0x0 32124 (28) 0x240002b15:0x45:0x0 32126 (28) 0x240002b15:0x47:0x0 32131 (28) 0x240002b15:0x4c:0x0 32159 (28) 0x240002b15:0x68:0x0 32171 (28) 0x240002b15:0x74:0x0 32174 (28) 0x240002b15:0x77:0x0 32194 (32) 0x240002b13:0xb7c:0x0 32195 (32) 0x240002b13:0xc63:0x0 32211 (48) 0x240002b13:0x1df6:0x0 32041 (1816) 0x240002b13:0x1e85:0x0 32025 (48) 0x240001b72:0x13:0x0 515464 (48) 0x240002b13:0xf8:0x0 512119 (48) 0x240000404:0x6:0x0 256019 (28) 0x240001b73:0x11:0x0 514539 (32) 0x240001b78:0x5a9a:0x0 32034 (32) 0x240001b78:0x5696:0x0 128028 (28) 0x240001b73:0x49:0x0 512430 (28) 0x240001b73:0x56:0x0 514737 (32) 0x240001b78:0x5b13:0x0 515247 (32) 0x240001b7b:0x806:0x0 515503 (28) 0x240002b13:0xa4:0x0 515058 (28) 0x240001b79:0xb7:0x0 515092 (28) 0x240001b79:0xbb:0x0 224022 (28) 0x240001b73:0xc:0x0 515377 (28) 0x240002b13:0x4c:0x0 512153 (48) 0x240000404:0x7e8:0x0 514544 (32) 0x240001b78:0x5a9f:0x0 320002 (28) 0x240001b73:0x52:0x0 515439 (28) 0x240002b13:0x84:0x0 515259 (28) 0x240002b10:0x18:0x0 514802 (28) 0x240001b79:0x96:0x0 514712 (32) 0x240001b78:0x5afa:0x0 514675 (32) 0x240001b78:0x5ad5:0x0 128012 (32) 0x240000401:0x17c:0x0 3223 (48) 0x240002b13:0xcc:0x0 288011 (28) 0x240001b73:0x2a:0x0 96024 (28) 0x240001b72:0x6:0x0 515516 (48) 0x240002b13:0x10e:0x0 32062 (32) 0x240002b15:0x7:0x0 515314 (28) 0x240002b13:0x17:0x0 515453 (28) 0x240002b13:0x8b:0x0 32080 (32) 0x240002b15:0x19:0x0 514686 (32) 0x240001b78:0x5ae0:0x0 515463 (28) 0x240002b13:0x90:0x0 515512 (48) 0x240002b13:0x10c:0x0 512450 (28) 0x240001b73:0x68:0x0 32083 (32) 0x240002b15:0x1c:0x0 514939 (28) 0x240001b79:0x93:0x0 512467 (32) 0x240001b74:0x1b6a:0x0 32096 (32) 0x240002b15:0x29:0x0 515212 (48) 0x240001b7b:0xd:0x0 32021 (48) 0x240000408:0x235d:0x0 515505 (28) 0x240002b13:0xa5:0x0 512105 (48) 0x240000404:0x1:0x0 96009 (32) 0x240000401:0x172:0x0 192011 (28) 0x2400013a0:0x7:0x0 514723 (32) 0x240001b78:0x5b05:0x0 32099 (32) 0x240002b15:0x2c:0x0 96022 (28) 0x240001b70:0x5:0x0 514553 (32) 0x240001b78:0x5aa8:0x0 515309 (28) 0x240002b10:0x35:0x0 32115 (28) 0x240002b15:0x3c:0x0 32134 (36) 0x240002b15:0x4f:0x0 515402 (28) 0x240002b13:0x65:0x0 256026 (28) 0x240001b73:0x23:0x0 514739 (32) 0x240001b78:0x5b15:0x0 3224 (48) 0x240002b13:0xcd:0x0 515240 (28) 0x240001b7b:0x26:0x0 3237 (48) 0x240002b13:0xda:0x0 514738 (32) 0x240001b78:0x5b14:0x0 32141 (28) 0x240002b15:0x56:0x0 32187 (28) 0x240002b15:0x84:0x0 32220 (2032) 0x240002b13:0x1f9a:0x0 160024 (28) 0x240001b73:0x6:0x0 515429 (28) 0x240002b13:0x7f:0x0 32058 (32) 0x240002b15:0x3:0x0 514780 (48) 0x240001b78:0x5bd4:0x0 515511 (28) 0x240002b13:0xa8:0x0 32063 (32) 0x240002b15:0x8:0x0 514775 (48) 0x240001b78:0x5bd1:0x0 32106 (32) 0x240002b15:0x33:0x0 515178 (28) 0x240001b79:0xcc:0x0 515352 (28) 0x240002b13:0x33:0x0 514727 (32) 0x240001b78:0x5b09:0x0 515264 (28) 0x240002b13:0x8:0x0 514767 (28) 0x240001b79:0x80:0x0 515084 (28) 0x240001b79:0xb9:0x0 515424 (28) 0x240002b13:0x7b:0x0 515532 (48) 0x240002b13:0x116:0x0 515376 (28) 0x240002b13:0x4b:0x0 224021 (28) 0x2400013a2:0x16:0x0 515522 (48) 0x240002b13:0x111:0x0 515320 (28) 0x240002b10:0x39:0x0 512326 (32) 0x240000401:0x194:0x0 515400 (28) 0x240002b13:0x63:0x0 515152 (28) 0x240001b79:0xc7:0x0 515125 (28) 0x240001b79:0xc1:0x0 514816 (28) 0x240001b79:0x75:0x0 514858 (28) 0x240001b79:0x79:0x0 515408 (28) 0x240002b13:0x6b:0x0 32108 (32) 0x240002b15:0x35:0x0 3231 (48) 0x240002b13:0xd4:0x0 515451 (28) 0x240002b13:0x8a:0x0 514677 (32) 0x240001b78:0x5ad7:0x0 515261 (28) 0x240002b13:0x6:0x0 96019 (32) 
0x240000401:0x182:0x0 64005 (32) 0x240000401:0x167:0x0 515340 (28) 0x240002b13:0x27:0x0 32119 (32) 0x240002b15:0x40:0x0 288015 (28) 0x240001b73:0x2e:0x0 515393 (28) 0x240002b13:0x5c:0x0 514518 (28) 0x240001b79:0x6:0x0 256005 (28) 0x2400013a0:0x29:0x0 514971 (28) 0x240001b79:0x97:0x0 128029 (28) 0x240001b73:0x5e:0x0 512448 (28) 0x240001b73:0x67:0x0 32120 (28) 0x240002b15:0x41:0x0 32122 (36) 0x240002b15:0x43:0x0 512402 (28) 0x240001b73:0x2f:0x0 514710 (32) 0x240001b78:0x5af8:0x0 32045 (32) 0x240002b13:0x134:0x0 3246 (48) 0x240002b13:0xe3:0x0 192003 (32) 0x240000401:0x190:0x0 514696 (32) 0x240001b78:0x5aea:0x0 512443 (28) 0x240001b73:0x63:0x0 515514 (48) 0x240002b13:0x10d:0x0 256025 (28) 0x240001b73:0x17:0x0 514718 (32) 0x240001b78:0x5b00:0x0 512489 (28) 0x240001b73:0x73:0x0 32049 (32) 0x240002b13:0x725:0x0 514681 (32) 0x240001b78:0x5adb:0x0 514771 (48) 0x240001b78:0x5bcf:0x0 514705 (32) 0x240001b78:0x5af3:0x0 514662 (32) 0x240001b78:0x5ac8:0x0 32128 (28) 0x240002b15:0x49:0x0 32129 (28) 0x240002b15:0x4a:0x0 32151 (28) 0x240002b15:0x60:0x0 32163 (28) 0x240002b15:0x6c:0x0 32175 (28) 0x240002b15:0x78:0x0 32185 (28) 0x240002b15:0x82:0x0 32186 (28) 0x240002b15:0x83:0x0 32225 (1948) 0x240002b13:0x1fb1:0x0 514797 (28) 0x240001b79:0x7d:0x0 515183 (28) 0x240001b79:0xce:0x0 2202 (48) 0x240002b13:0xba:0x0 515097 (28) 0x240001b79:0xbd:0x0 514763 (28) 0x240001b79:0x8a:0x0 515425 (28) 0x240002b13:0x7c:0x0 32057 (32) 0x240002b15:0x2:0x0 288008 (28) 0x240001b73:0x27:0x0 515215 (28) 0x240001b7b:0xe:0x0 32098 (32) 0x240002b15:0x2b:0x0 515431 (28) 0x240002b13:0x80:0x0 32100 (32) 0x240002b15:0x2d:0x0 514743 (32) 0x240001b78:0x5b19:0x0 512141 (48) 0x240000404:0x7e2:0x0 192018 (28) 0x2400013a0:0xe:0x0 32121 (32) 0x240002b15:0x42:0x0 515475 (28) 0x240002b13:0x96:0x0 288019 (28) 0x240001b73:0x3c:0x0 514550 (32) 0x240001b78:0x5aa5:0x0 32135 (32) 0x240002b15:0x50:0x0 515217 (48) 0x240001b7b:0x11:0x0 32172 (32) 0x240002b15:0x75:0x0 288004 (28) 0x240001b73:0x1b:0x0 515339 (28) 0x240002b13:0x26:0x0 515321 (28) 0x240002b10:0x3a:0x0 288027 (28) 0x240001b73:0x4f:0x0 515017 (28) 0x240001b79:0xa6:0x0 32176 (32) 0x240002b15:0x79:0x0 288006 (28) 0x240001b73:0x25:0x0 3244 (48) 0x240002b13:0xe1:0x0 32184 (32) 0x240002b15:0x81:0x0 515254 (28) 0x240002b10:0x15:0x0 32201 (32) 0x240002b10:0x94:0x0 515535 (80) 0x240002b13:0x119:0x0 288010 (28) 0x240001b73:0x29:0x0 515527 (28) 0x240002b13:0xb0:0x0 3209 (48) 0x240002b13:0xbe:0x0 514719 (32) 0x240001b78:0x5b01:0x0 256027 (28) 0x240001b73:0x38:0x0 515337 (28) 0x240002b13:0x24:0x0 64019 (32) 0x240000401:0x181:0x0 160009 (64) 0x240000401:0x187:0x0 514534 (32) 0x240001b78:0x5a95:0x0 514745 (64) 0x240001b78:0x5b1b:0x0 514796 (28) 0x240001b79:0x95:0x0 515248 (28) 0x240002b10:0x4:0x0 515121 (28) 0x240001b79:0xc0:0x0 515389 (28) 0x240002b13:0x58:0x0 64020 (32) 0x240000401:0x18a:0x0 514704 (32) 0x240001b78:0x5af2:0x0 514689 (64) 0x240001b78:0x5ae3:0x0 96014 (32) 0x240000401:0x175:0x0 515525 (92) 0x240002b13:0xaf:0x0 515470 (48) 0x240002b13:0xfb:0x0 515413 (28) 0x240002b13:0x70:0x0 3232 (48) 0x240002b13:0xd5:0x0 64001 (2096) 0x240000401:0x157:0x0 3219 (48) 0x240002b13:0xc8:0x0 515030 (28) 0x240001b79:0xaa:0x0 514668 (32) 0x240001b78:0x5ace:0x0 512161 (32) 0x240000401:0x150:0x0 32180 (28) 0x240002b15:0x7d:0x0 32127 (76) 0x240002b15:0x48:0x0 224004 (28) 0x2400013a0:0x14:0x0 32110 (28) 0x240002b15:0x37:0x0 514720 (32) 0x240001b78:0x5b02:0x0 64009 (32) 0x240000401:0x169:0x0 3225 (48) 0x240002b13:0xce:0x0 514862 (28) 0x240001b79:0x7b:0x0 515378 (28) 0x240002b13:0x4d:0x0 96021 (28) 0x2400013a0:0x2e:0x0 
514685 (32) 0x240001b78:0x5adf:0x0 192025 (28) 0x240001b73:0xb:0x0 514661 (32) 0x240001b78:0x5ac7:0x0 514857 (28) 0x240001b79:0x78:0x0 515210 (28) 0x240001b79:0xd6:0x0 514752 (76) 0x240001b79:0x70:0x0 515443 (28) 0x240002b13:0x86:0x0 512417 (28) 0x240001b73:0x43:0x0 515531 (48) 0x240002b13:0x115:0x0 32019 (32) 0x240000401:0x180:0x0 32069 (28) 0x240002b15:0xe:0x0 512486 (28) 0x240001b73:0x70:0x0 32095 (28) 0x240002b15:0x28:0x0 32059 (28) 0x240002b15:0x4:0x0 288020 (28) 0x240001b73:0x3d:0x0 32154 (28) 0x240002b15:0x63:0x0 515521 (28) 0x240002b13:0xad:0x0 512485 (76) 0x240001b73:0x6f:0x0 515465 (28) 0x240002b13:0x91:0x0 515369 (28) 0x240002b13:0x44:0x0 32181 (28) 0x240002b15:0x7e:0x0 515373 (28) 0x240002b13:0x48:0x0 224017 (28) 0x2400013a0:0x21:0x0 32109 (28) 0x240002b15:0x36:0x0 32024 (28) 0x2400013a2:0x24:0x0 514564 (32) 0x240001b78:0x5ab3:0x0 512484 (28) 0x240001b73:0x6e:0x0 32111 (28) 0x240002b15:0x38:0x0 288009 (28) 0x240001b73:0x28:0x0 288018 (28) 0x240001b73:0x3b:0x0 32177 (28) 0x240002b15:0x7a:0x0 514746 (32) 0x240001b78:0x5b1c:0x0 256022 (28) 0x240001b73:0x14:0x0 32043 (32) 0x240002b13:0x12e:0x0 224023 (76) 0x240001b73:0xd:0x0 515490 (48) 0x240002b13:0x103:0x0 514820 (48) 0x240001b78:0x5c0c:0x0 3256 (48) 0x240002b13:0xed:0x0 320004 (28) 0x240001b73:0x54:0x0 96012 (32) 0x240000401:0x174:0x0 515204 (28) 0x240001b7b:0x8:0x0 515452 (48) 0x240002b13:0xf4:0x0 32094 (28) 0x240002b15:0x27:0x0 3254 (48) 0x240002b13:0xeb:0x0 32052 (2076) 0x240002b13:0xb76:0x0 514761 (28) 0x240001b79:0x72:0x0 3227 (48) 0x240002b13:0xd0:0x0 515523 (28) 0x240002b13:0xae:0x0 515241 (80) 0x240001b7b:0x7f7:0x0 514541 (32) 0x240001b78:0x5a9c:0x0 32051 (32) 0x240002b13:0xb11:0x0 514995 (76) 0x240001b79:0xa2:0x0 514542 (32) 0x240001b78:0x5a9d:0x0 32166 (28) 0x240002b15:0x6f:0x0 3243 (48) 0x240002b13:0xe0:0x0 160010 (80) 0x240000401:0x188:0x0 515582 (32) 0x240002b13:0x972:0x0 32065 (28) 0x240002b15:0xa:0x0 96029 (28) 0x240001b73:0x5d:0x0 515189 (28) 0x240001b79:0xd1:0x0 514676 (32) 0x240001b78:0x5ad6:0x0 515536 (48) 0x240002b13:0x11a:0x0 512466 (32) 0x240001b74:0x1b65:0x0 515207 (28) 0x240001b7b:0xa:0x0 514991 (28) 0x240001b79:0xa1:0x0 3221 (48) 0x240002b13:0xca:0x0 515195 (48) 0x240001b7b:0x1:0x0 515473 (28) 0x240002b13:0x95:0x0 32089 (28) 0x240002b15:0x22:0x0 160025 (28) 0x240001b73:0x7:0x0 160003 (32) 0x240000401:0x184:0x0 514728 (32) 0x240001b78:0x5b0a:0x0 514574 (32) 0x240001b78:0x5abd:0x0 32035 (32) 0x240001b78:0x5697:0x0 515506 (48) 0x240002b13:0x109:0x0 32016 (32) 0x240000401:0x165:0x0 192029 (28) 0x240001b73:0x60:0x0 32026 (28) 0x240001b73:0x1c:0x0 3210 (48) 0x240002b13:0xbf:0x0 224011 (28) 0x2400013a0:0x1b:0x0 514703 (32) 0x240001b78:0x5af1:0x0 192005 (32) 0x240000401:0x191:0x0 515344 (28) 0x240002b13:0x2b:0x0 3255 (48) 0x240002b13:0xec:0x0 32147 (28) 0x240002b15:0x5c:0x0 32155 (28) 0x240002b15:0x64:0x0 32158 (28) 0x240002b15:0x67:0x0 288014 (28) 0x240001b73:0x2d:0x0 515553 (28) 0x240002b10:0x92:0x0 514680 (80) 0x240001b78:0x5ada:0x0 512232 (32) 0x240000401:0x164:0x0 515268 (28) 0x240002b10:0x1a:0x0 160022 (28) 0x240001b73:0x4:0x0 32190 (28) 0x240002b15:0x87:0x0 192015 (28) 0x2400013a0:0xb:0x0 515040 (28) 0x240001b79:0xae:0x0 3259 (48) 0x240002b13:0xf0:0x0 514706 (32) 0x240001b78:0x5af4:0x0 96028 (28) 0x240001b73:0x48:0x0 514774 (28) 0x240001b79:0x8d:0x0 515454 (2112) 0x240002b13:0xf5:0x0 224002 (28) 0x2400013a0:0x12:0x0 515497 (28) 0x240002b13:0xa1:0x0 32198 (48) 0x240002b13:0xc69:0x0 512207 (48) 0x240000406:0x1ccd:0x0 515353 (28) 0x240002b13:0x34:0x0 515501 (28) 0x240002b13:0xa3:0x0 512216 (32) 
0x240000401:0x160:0x0 512138 (28) 0x240000403:0x1:0x0 515420 (28) 0x240002b13:0x77:0x0 515313 (28) 0x240002b10:0x38:0x0 515052 (28) 0x240001b79:0xb4:0x0 32086 (28) 0x240002b15:0x1f:0x0 288023 (28) 0x240001b73:0x40:0x0 224003 (28) 0x2400013a0:0x13:0x0 32200 (48) 0x240002b13:0xd62:0x0 514750 (32) 0x240001b78:0x5b20:0x0 512445 (28) 0x240001b73:0x65:0x0 514546 (80) 0x240001b78:0x5aa1:0x0 515032 (28) 0x240001b79:0xac:0x0 512126 (48) 0x240000404:0x8:0x0 515198 (28) 0x240001b79:0xd4:0x0 32006 (96) 0x240000405:0x31:0x0 288016 (76) 0x240001b73:0x39:0x0 514658 (32) 0x240001b78:0x5ac4:0x0 3248 (48) 0x240002b13:0xe5:0x0 3229 (48) 0x240002b13:0xd2:0x0 224025 (28) 0x240001b73:0xf:0x0 32092 (28) 0x240002b15:0x25:0x0 515533 (48) 0x240002b13:0x117:0x0 515375 (28) 0x240002b13:0x4a:0x0 515449 (76) 0x240002b13:0x89:0x0 512220 (80) 0x240000401:0x161:0x0 128017 (32) 0x240000401:0x17f:0x0 192022 (28) 0x240001b73:0x8:0x0 188 (28) 0x2400013a2:0x13:0x0 515404 (28) 0x240002b13:0x67:0x0 514565 (32) 0x240001b78:0x5ab4:0x0 256018 (28) 0x240001b73:0x10:0x0 512212 (48) 0x240000406:0x1ccf:0x0 514847 (28) 0x240001b79:0x77:0x0 514878 (28) 0x240001b79:0x83:0x0 512452 (28) 0x240001b73:0x6a:0x0 512132 (96) 0x240000404:0xa:0x0 32143 (28) 0x240002b15:0x58:0x0 3215 (96) 0x240002b13:0xc4:0x0 224019 (28) 0x2400013a0:0x23:0x0 3245 (48) 0x240002b13:0xe2:0x0 515399 (28) 0x240002b13:0x62:0x0 512211 (48) 0x240000406:0x1cce:0x0 288002 (28) 0x240001b73:0x19:0x0 32207 (2072) 0x240002b13:0x13d9:0x0 oleg146-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg146-server: /dev/mapper/mds1_flakey: catastrophic mode - not reading inode or group bitmaps 512004 (12) . 2 (4084) .. 512114 (32) 0x200000401:0x148:0x0 512161 (32) 0x200000401:0x152:0x0 160001 (32) 0x200000401:0x186:0x0 160006 (32) 0x200000401:0x188:0x0 32034 (48) 0x2000013a1:0x64c:0x0 32053 (48) 0x2000013a1:0x663:0x0 32055 (48) 0x2000013a1:0x665:0x0 64039 (48) 0x2000013a1:0x67a:0x0 96029 (32) 0x2000013a2:0x8b6:0x0 32063 (28) 0x200002341:0x3:0x0 260 (48) 0x200002342:0x7f70:0x0 32071 (48) 0x200002342:0x7f8c:0x0 32077 (48) 0x200002342:0x7f92:0x0 32089 (48) 0x200002342:0x7fa2:0x0 32094 (48) 0x200002342:0x7fa8:0x0 32098 (48) 0x200002342:0x7fac:0x0 64069 (48) 0x200002342:0x7ff2:0x0 64100 (48) 0x200002342:0x801c:0x0 64109 (48) 0x200002342:0x802c:0x0 64135 (48) 0x200002342:0x8052:0x0 64145 (28) 0x200002341:0x7:0x0 32158 (32) 0x200002b11:0x774c:0x0 518365 (32) 0x200003ab1:0x33f4:0x0 518496 (32) 0x200003ab1:0x3466:0x0 518437 (32) 0x200003ab1:0x343c:0x0 518396 (32) 0x200003ab1:0x3413:0x0 518532 (32) 0x200003ab1:0x3478:0x0 518370 (32) 0x200003ab1:0x33f9:0x0 518466 (32) 0x200003ab1:0x3457:0x0 518443 (32) 0x200003ab1:0x3442:0x0 518367 (32) 0x200003ab1:0x33f6:0x0 32196 (28) 0x200002341:0xcb:0x0 32221 (28) 0x200002341:0xd6:0x0 32230 (28) 0x200002341:0xdb:0x0 32232 (28) 0x200002341:0xdc:0x0 32233 (28) 0x200002341:0xdd:0x0 32239 (28) 0x200002341:0xe1:0x0 32241 (28) 0x200002341:0xe2:0x0 64150 (28) 0x200002341:0xe4:0x0 64156 (28) 0x200002341:0xe9:0x0 32245 (28) 0x200002341:0xfd:0x0 64181 (32) 0x200002341:0x100:0x0 64197 (32) 0x200002341:0x108:0x0 64221 (32) 0x200002341:0x115:0x0 64240 (32) 0x200002341:0x120:0x0 96059 (28) 0x200004283:0x4d:0x0 96098 (28) 0x200005222:0x22:0x0 96099 (28) 0x200005222:0x23:0x0 32267 (32) 0x200005222:0x192:0x0 32285 (32) 0x200005224:0x1f70:0x0 32309 (32) 0x200005224:0x1f88:0x0 32317 (32) 0x200005224:0x1f90:0x0 32323 (32) 0x200005224:0x1f96:0x0 32333 (32) 0x200005224:0x1fa0:0x0 32340 (32) 0x200005224:0x1fa7:0x0 32342 (32) 0x200005224:0x1fa9:0x0 32356 (32) 0x200005224:0x1fb7:0x0 32367 
(32) 0x200005224:0x1fc2:0x0 32373 (32) 0x200005222:0x194:0x0 64262 (32) 0x200005222:0x195:0x0 64267 (32) 0x200005222:0x19d:0x0 64272 (1980) 0x20000522a:0x1:0x0 32032 (48) 0x2000013a1:0x64a:0x0 64056 (48) 0x2000013a1:0x68d:0x0 64008 (32) 0x200000401:0x16a:0x0 96007 (32) 0x200000401:0x173:0x0 64061 (32) 0x2000013a2:0x8b1:0x0 512253 (32) 0x200000401:0x163:0x0 64014 (32) 0x200000401:0x16d:0x0 512129 (32) 0x200000401:0x14c:0x0 64021 (32) 0x200000401:0x18a:0x0 64012 (32) 0x200000401:0x16c:0x0 512158 (32) 0x200000401:0x151:0x0 32130 (48) 0x200002342:0x7fd6:0x0 32132 (48) 0x200002342:0x7fd8:0x0 32136 (48) 0x200002342:0x7fdc:0x0 64066 (48) 0x200002342:0x7fee:0x0 64075 (48) 0x200002342:0x7ffa:0x0 64093 (48) 0x200002342:0x8012:0x0 32151 (48) 0x200002342:0x8038:0x0 64133 (48) 0x200002342:0x8050:0x0 32159 (28) 0x200002341:0x9:0x0 518357 (48) 0x200003ab1:0x2fff:0x0 518447 (32) 0x200003ab1:0x3446:0x0 518361 (32) 0x200003ab1:0x33f2:0x0 518369 (32) 0x200003ab1:0x33f8:0x0 518476 (32) 0x200003ab1:0x345c:0x0 518459 (32) 0x200003ab1:0x3452:0x0 518454 (32) 0x200003ab1:0x344d:0x0 518550 (32) 0x200003ab1:0x3481:0x0 518379 (32) 0x200003ab1:0x3402:0x0 518399 (32) 0x200003ab1:0x3416:0x0 518457 (32) 0x200003ab1:0x3450:0x0 518430 (32) 0x200003ab1:0x3435:0x0 32174 (28) 0x200002341:0xbe:0x0 32176 (28) 0x200002341:0xc0:0x0 32181 (28) 0x200002341:0xc3:0x0 32192 (28) 0x200002341:0xc9:0x0 32205 (28) 0x200002341:0xcf:0x0 32226 (28) 0x200002341:0xd8:0x0 32229 (28) 0x200002341:0xda:0x0 64154 (28) 0x200002341:0xe7:0x0 64175 (28) 0x200002341:0xf5:0x0 64176 (28) 0x200002341:0xf6:0x0 64180 (28) 0x200002341:0xff:0x0 64189 (32) 0x200002341:0x106:0x0 64201 (32) 0x200002341:0x10a:0x0 64208 (32) 0x200002341:0x10b:0x0 64225 (32) 0x200002341:0x116:0x0 64227 (32) 0x200002341:0x117:0x0 64235 (32) 0x200002341:0x11d:0x0 64250 (32) 0x200002341:0x131:0x0 32255 (48) 0x200005221:0xa5c:0x0 32262 (28) 0x200005222:0xd6:0x0 32259 (28) 0x200005222:0xad:0x0 32263 (28) 0x200005224:0xcb:0x0 32282 (32) 0x200005224:0x1f6d:0x0 32283 (32) 0x200005224:0x1f6e:0x0 32284 (32) 0x200005224:0x1f6f:0x0 32286 (32) 0x200005224:0x1f71:0x0 32288 (32) 0x200005224:0x1f73:0x0 32291 (32) 0x200005224:0x1f76:0x0 32301 (32) 0x200005224:0x1f80:0x0 32310 (32) 0x200005224:0x1f89:0x0 32312 (32) 0x200005224:0x1f8b:0x0 32313 (32) 0x200005224:0x1f8c:0x0 32319 (32) 0x200005224:0x1f92:0x0 32335 (32) 0x200005224:0x1fa2:0x0 32343 (32) 0x200005224:0x1faa:0x0 32349 (32) 0x200005224:0x1fb0:0x0 32351 (32) 0x200005224:0x1fb2:0x0 32353 (32) 0x200005224:0x1fb4:0x0 32354 (32) 0x200005224:0x1fb5:0x0 32359 (32) 0x200005224:0x1fba:0x0 32363 (32) 0x200005224:0x1fbe:0x0 32371 (1628) 0x200005222:0x193:0x0 32126 (48) 0x200002342:0x7fd2:0x0 518560 (32) 0x200003ab1:0x3486:0x0 64084 (48) 0x200002342:0x8006:0x0 32069 (32) 0x200002342:0x7f89:0x0 518504 (32) 0x200003ab1:0x346a:0x0 518433 (32) 0x200003ab1:0x3438:0x0 32014 (32) 0x200000401:0x167:0x0 518366 (32) 0x200003ab1:0x33f5:0x0 64080 (48) 0x200002342:0x8000:0x0 96027 (32) 0x2000013a2:0x8ac:0x0 512127 (32) 0x200000401:0x14b:0x0 32041 (48) 0x2000013a1:0x654:0x0 518385 (32) 0x200003ab1:0x3408:0x0 32020 (32) 0x200000401:0x17e:0x0 518474 (32) 0x200003ab1:0x345b:0x0 518364 (32) 0x200003ab1:0x33f3:0x0 518421 (32) 0x200003ab1:0x342c:0x0 518482 (32) 0x200003ab1:0x345f:0x0 64010 (32) 0x200000401:0x16b:0x0 515995 (48) 0x200002342:0x7f6c:0x0 512111 (32) 0x200000401:0x147:0x0 32106 (48) 0x200002342:0x7fb6:0x0 160015 (32) 0x200000401:0x191:0x0 32081 (48) 0x200002342:0x7f96:0x0 32116 (48) 0x200002342:0x7fc2:0x0 64064 (48) 0x200002342:0x7fec:0x0 512164 (32) 
0x200000401:0x153:0x0 128002 (32) 0x200000401:0x17b:0x0 32016 (32) 0x200000401:0x168:0x0 96005 (32) 0x200000401:0x172:0x0 518422 (32) 0x200003ab1:0x342d:0x0 518402 (32) 0x200003ab1:0x3419:0x0 518423 (32) 0x200003ab1:0x342e:0x0 518428 (32) 0x200003ab1:0x3433:0x0 32199 (28) 0x200002341:0xcc:0x0 32207 (28) 0x200002341:0xd0:0x0 32214 (28) 0x200002341:0xd1:0x0 32218 (28) 0x200002341:0xd4:0x0 64184 (32) 0x200002341:0x102:0x0 64199 (32) 0x200002341:0x109:0x0 64210 (32) 0x200002341:0x10c:0x0 64211 (32) 0x200002341:0x10d:0x0 64212 (32) 0x200002341:0x10e:0x0 64215 (32) 0x200002341:0x111:0x0 64233 (32) 0x200002341:0x11b:0x0 96053 (32) 0x200002341:0x12f:0x0 64256 (48) 0x200005221:0xa82:0x0 96084 (28) 0x200005222:0x21:0x0 96102 (28) 0x200005222:0x24:0x0 32272 (32) 0x200005224:0x1f63:0x0 32275 (32) 0x200005224:0x1f66:0x0 32276 (32) 0x200005224:0x1f67:0x0 32294 (32) 0x200005224:0x1f79:0x0 32297 (32) 0x200005224:0x1f7c:0x0 32299 (32) 0x200005224:0x1f7e:0x0 32314 (32) 0x200005224:0x1f8d:0x0 32329 (32) 0x200005224:0x1f9c:0x0 32334 (32) 0x200005224:0x1fa1:0x0 32336 (2104) 0x200005224:0x1fa3:0x0 64016 (32) 0x200000401:0x16e:0x0 518435 (32) 0x200003ab1:0x343a:0x0 64048 (48) 0x2000013a1:0x685:0x0 32039 (48) 0x2000013a1:0x652:0x0 512141 (32) 0x200000401:0x150:0x0 512258 (32) 0x200000401:0x165:0x0 128021 (32) 0x200000401:0x18b:0x0 64137 (32) 0x200002b11:0x69a7:0x0 32075 (48) 0x200002342:0x7f90:0x0 96020 (32) 0x200000401:0x17f:0x0 160011 (32) 0x200000401:0x18d:0x0 518470 (32) 0x200003ab1:0x3459:0x0 518518 (32) 0x200003ab1:0x3471:0x0 64096 (48) 0x200002342:0x8016:0x0 64068 (48) 0x200002342:0x7ff0:0x0 64045 (48) 0x2000013a1:0x683:0x0 32064 (48) 0x200002342:0x7f7f:0x0 518432 (32) 0x200003ab1:0x3437:0x0 96003 (32) 0x200000401:0x171:0x0 518419 (32) 0x200003ab1:0x342a:0x0 64078 (48) 0x200002342:0x7ffe:0x0 32154 (28) 0x200002341:0x5:0x0 518462 (32) 0x200003ab1:0x3455:0x0 518546 (32) 0x200003ab1:0x347f:0x0 518425 (32) 0x200003ab1:0x3430:0x0 518424 (32) 0x200003ab1:0x342f:0x0 518405 (32) 0x200003ab1:0x341c:0x0 518408 (32) 0x200003ab1:0x341f:0x0 32183 (28) 0x200002341:0xc4:0x0 32172 (28) 0x200002341:0xbb:0x0 32184 (28) 0x200002341:0xc5:0x0 32203 (28) 0x200002341:0xcd:0x0 32216 (28) 0x200002341:0xd2:0x0 32217 (28) 0x200002341:0xd3:0x0 64153 (28) 0x200002341:0xe6:0x0 64155 (28) 0x200002341:0xe8:0x0 64159 (28) 0x200002341:0xeb:0x0 64163 (28) 0x200002341:0xed:0x0 64165 (28) 0x200002341:0xef:0x0 32242 (28) 0x200002341:0xf1:0x0 32247 (32) 0x200002341:0x105:0x0 64217 (32) 0x200002341:0x113:0x0 96043 (32) 0x200002341:0x12b:0x0 64253 (48) 0x200005221:0xa80:0x0 96078 (48) 0x200005224:0x1c:0x0 96081 (28) 0x200005222:0x1e:0x0 96083 (28) 0x200005222:0x20:0x0 32270 (32) 0x200005224:0x1f1b:0x0 32281 (32) 0x200005224:0x1f6c:0x0 96142 (28) 0x200005224:0x48:0x0 32292 (32) 0x200005224:0x1f77:0x0 32305 (32) 0x200005224:0x1f84:0x0 32315 (32) 0x200005224:0x1f8e:0x0 32318 (32) 0x200005224:0x1f91:0x0 32332 (32) 0x200005224:0x1f9f:0x0 32337 (32) 0x200005224:0x1fa4:0x0 32338 (32) 0x200005224:0x1fa5:0x0 32339 (32) 0x200005224:0x1fa6:0x0 32345 (32) 0x200005224:0x1fac:0x0 32347 (32) 0x200005224:0x1fae:0x0 32368 (32) 0x200005224:0x1fc3:0x0 32369 (32) 0x200005224:0x1fc4:0x0 64264 (32) 0x200005222:0x198:0x0 64265 (1984) 0x200005222:0x199:0x0 96130 (28) 0x200005224:0x3c:0x0 32193 (28) 0x200002341:0xca:0x0 128019 (32) 0x200000401:0x185:0x0 32258 (28) 0x200005222:0x8a:0x0 96055 (28) 0x200004283:0x1b:0x0 512131 (32) 0x200000401:0x14d:0x0 64173 (28) 0x200002341:0xf3:0x0 32249 (32) 0x200002341:0x12c:0x0 96068 (48) 0x200005224:0xc:0x0 64026 (48) 
0x2000013a1:0x668:0x0 96042 (32) 0x200002341:0x12a:0x0 64127 (48) 0x200002342:0x8048:0x0 32175 (28) 0x200002341:0xbf:0x0 64094 (48) 0x200002342:0x8014:0x0 64152 (28) 0x200002341:0xe5:0x0 96048 (32) 0x200002341:0x12e:0x0 64182 (32) 0x200002341:0x101:0x0 64241 (32) 0x200002341:0x121:0x0 64119 (48) 0x200002342:0x803c:0x0 518516 (32) 0x200003ab1:0x3470:0x0 96075 (28) 0x200005224:0x13:0x0 96080 (28) 0x200005222:0x1d:0x0 32173 (28) 0x200002341:0xbd:0x0 32250 (32) 0x200002341:0x130:0x0 518400 (32) 0x200003ab1:0x3417:0x0 64216 (32) 0x200002341:0x112:0x0 96040 (32) 0x200002341:0x129:0x0 512133 (32) 0x200000401:0x14e:0x0 64174 (28) 0x200002341:0xf4:0x0 64106 (48) 0x200002342:0x8026:0x0 32268 (48) 0x200005224:0x862:0x0 96038 (32) 0x200002341:0x128:0x0 64017 (32) 0x200000401:0x16f:0x0 64030 (48) 0x2000013a1:0x66e:0x0 64088 (48) 0x200002342:0x800a:0x0 512262 (32) 0x200000401:0x166:0x0 32273 (48) 0x200005224:0x1f64:0x0 64172 (28) 0x200002341:0xf2:0x0 64024 (48) 0x2000013a1:0x667:0x0 32104 (48) 0x200002342:0x7fb4:0x0 518383 (32) 0x200003ab1:0x3406:0x0 64232 (32) 0x200002341:0x11a:0x0 518558 (32) 0x200003ab1:0x3485:0x0 518494 (32) 0x200003ab1:0x3465:0x0 32156 (32) 0x200002b11:0x699b:0x0 64073 (48) 0x200002342:0x7ff8:0x0 96070 (28) 0x200005222:0x17:0x0 32019 (32) 0x200000401:0x177:0x0 160008 (32) 0x200000401:0x189:0x0 32274 (48) 0x200005224:0x1f65:0x0 32143 (48) 0x200002342:0x7fe6:0x0 32167 (28) 0x200002341:0xba:0x0 64059 (32) 0x2000013a2:0x8a6:0x0 32243 (28) 0x200002341:0xfc:0x0 64242 (32) 0x200002341:0x122:0x0 96082 (28) 0x200005222:0x1f:0x0 64050 (48) 0x2000013a1:0x687:0x0 32277 (32) 0x200005224:0x1f68:0x0 32289 (32) 0x200005224:0x1f74:0x0 32293 (32) 0x200005224:0x1f78:0x0 32298 (32) 0x200005224:0x1f7d:0x0 32302 (32) 0x200005224:0x1f81:0x0 32308 (32) 0x200005224:0x1f87:0x0 32311 (32) 0x200005224:0x1f8a:0x0 32316 (32) 0x200005224:0x1f8f:0x0 32325 (32) 0x200005224:0x1f98:0x0 32326 (32) 0x200005224:0x1f99:0x0 32328 (32) 0x200005224:0x1f9b:0x0 32350 (32) 0x200005224:0x1fb1:0x0 32357 (32) 0x200005224:0x1fb8:0x0 32358 (32) 0x200005224:0x1fb9:0x0 64266 (32) 0x200005222:0x19a:0x0 64268 (32) 0x200005222:0x19f:0x0 64270 (32) 0x200005222:0x1a2:0x0 64271 (1536) 0x200005222:0x1a3:0x0 32022 (32) 0x200000401:0x18e:0x0 32048 (48) 0x2000013a1:0x65d:0x0 96031 (32) 0x200002342:0x75a0:0x0 518520 (32) 0x200003ab1:0x3472:0x0 64244 (60) 0x200002341:0x123:0x0 518522 (32) 0x200003ab1:0x3473:0x0 160003 (32) 0x200000401:0x187:0x0 32155 (32) 0x200002b11:0x6997:0x0 64220 (32) 0x200002341:0x114:0x0 518464 (32) 0x200003ab1:0x3456:0x0 32254 (32) 0x200005221:0xa23:0x0 518378 (32) 0x200003ab1:0x3401:0x0 64149 (28) 0x200002341:0xe3:0x0 32190 (28) 0x200002341:0xc7:0x0 32083 (48) 0x200002342:0x7f98:0x0 96060 (28) 0x200004283:0x4f:0x0 518387 (32) 0x200003ab1:0x340a:0x0 32279 (32) 0x200005224:0x1f6a:0x0 64259 (28) 0x200005222:0xd8:0x0 160010 (32) 0x200000401:0x18c:0x0 64052 (48) 0x2000013a1:0x689:0x0 32220 (28) 0x200002341:0xd5:0x0 32128 (48) 0x200002342:0x7fd4:0x0 32280 (32) 0x200005224:0x1f6b:0x0 32145 (48) 0x200002342:0x7fe8:0x0 96010 (32) 0x200000401:0x174:0x0 64237 (32) 0x200002341:0x11e:0x0 32096 (48) 0x200002342:0x7faa:0x0 96022 (32) 0x200000401:0x18f:0x0 518450 (32) 0x200003ab1:0x3449:0x0 96036 (32) 0x200002341:0x127:0x0 64228 (32) 0x200002341:0x118:0x0 160013 (32) 0x200000401:0x190:0x0 32028 (32) 0x200000401:0x192:0x0 518468 (32) 0x200003ab1:0x3458:0x0 518427 (32) 0x200003ab1:0x3432:0x0 518524 (32) 0x200003ab1:0x3474:0x0 32166 (28) 0x200002341:0xb9:0x0 32010 (32) 0x200000401:0x157:0x0 32238 (28) 0x200002341:0xe0:0x0 32290 (32) 
0x200005224:0x1f75:0x0 512249 (32) 0x200000401:0x162:0x0 96111 (28) 0x200005224:0x29:0x0 32112 (48) 0x200002342:0x7fbe:0x0 64054 (48) 0x2000013a1:0x68b:0x0 512139 (32) 0x200000401:0x14f:0x0 64247 (32) 0x200002341:0x124:0x0 32046 (48) 0x2000013a1:0x65b:0x0 518461 (32) 0x200003ab1:0x3454:0x0 518540 (32) 0x200003ab1:0x347c:0x0 518394 (32) 0x200003ab1:0x3411:0x0 32236 (28) 0x200002341:0xdf:0x0 518374 (32) 0x200003ab1:0x33fd:0x0 64187 (32) 0x200002341:0x104:0x0 32296 (32) 0x200005224:0x1f7b:0x0 512120 (32) 0x200000401:0x149:0x0 64185 (32) 0x200002341:0x103:0x0 96132 (28) 0x200005224:0x3e:0x0 64164 (28) 0x200002341:0xee:0x0 32300 (32) 0x200005224:0x1f7f:0x0 32303 (32) 0x200005224:0x1f82:0x0 32306 (32) 0x200005224:0x1f85:0x0 32320 (32) 0x200005224:0x1f93:0x0 32321 (32) 0x200005224:0x1f94:0x0 32324 (32) 0x200005224:0x1f97:0x0 32327 (32) 0x200005224:0x1f9a:0x0 32331 (32) 0x200005224:0x1f9e:0x0 32344 (32) 0x200005224:0x1fab:0x0 32348 (32) 0x200005224:0x1faf:0x0 32352 (32) 0x200005224:0x1fb3:0x0 32361 (32) 0x200005224:0x1fbc:0x0 32370 (1696) 0x200005224:0x1fc5:0x0 96025 (32) 0x2000013a2:0x8aa:0x0 96016 (32) 0x200000401:0x178:0x0 96032 (32) 0x200002341:0x125:0x0 64239 (32) 0x200002341:0x11f:0x0 32269 (32) 0x200005224:0x1e5e:0x0 96056 (28) 0x200004283:0x1c:0x0 64258 (28) 0x200005222:0xd1:0x0 518486 (32) 0x200003ab1:0x3461:0x0 32304 (32) 0x200005224:0x1f83:0x0 96121 (28) 0x200005224:0x33:0x0 32264 (28) 0x200005224:0xcd:0x0 518556 (32) 0x200003ab1:0x3484:0x0 762 (48) 0x200005224:0xce:0x0 128010 (32) 0x200000401:0x180:0x0 32261 (28) 0x200005222:0xcb:0x0 518434 (32) 0x200003ab1:0x3439:0x0 32330 (32) 0x200005224:0x1f9d:0x0 96090 (48) 0x200005224:0x1e:0x0 32341 (32) 0x200005224:0x1fa8:0x0 96064 (48) 0x200005223:0x1:0x0 518389 (32) 0x200003ab1:0x340c:0x0 32134 (48) 0x200002342:0x7fda:0x0 32102 (48) 0x200002342:0x7fb2:0x0 32346 (32) 0x200005224:0x1fad:0x0 512211 (32) 0x200000401:0x15a:0x0 128006 (32) 0x200000401:0x17d:0x0 518413 (32) 0x200003ab1:0x3424:0x0 64231 (32) 0x200002341:0x119:0x0 32228 (28) 0x200002341:0xd9:0x0 32118 (48) 0x200002342:0x7fc4:0x0 96072 (28) 0x200005222:0x19:0x0 32355 (32) 0x200005224:0x1fb6:0x0 96019 (32) 0x200000401:0x17a:0x0 32188 (28) 0x200002341:0xc6:0x0 96001 (32) 0x200000401:0x170:0x0 96045 (32) 0x200002341:0x12d:0x0 518390 (32) 0x200003ab1:0x340d:0x0 32260 (28) 0x200005222:0xbe:0x0 32362 (32) 0x200005224:0x1fbd:0x0 32222 (28) 0x200002341:0xd7:0x0 96061 (28) 0x200004283:0x52:0x0 96103 (28) 0x200005222:0x25:0x0 32153 (28) 0x200002341:0x4:0x0 518530 (32) 0x200003ab1:0x3477:0x0 96024 (32) 0x2000013a2:0x8a8:0x0 32079 (48) 0x200002342:0x7f94:0x0 64033 (48) 0x2000013a1:0x672:0x0 32050 (48) 0x2000013a1:0x65f:0x0 32204 (28) 0x200002341:0xce:0x0 32058 (48) 0x2000013a2:0x89e:0x0 32114 (48) 0x200002342:0x7fc0:0x0 32191 (28) 0x200002341:0xc8:0x0 64195 (32) 0x200002341:0x107:0x0 32179 (28) 0x200002341:0xc2:0x0 32365 (32) 0x200005224:0x1fc0:0x0 96017 (2224) 0x200000401:0x179:0x0 64124 (48) 0x200002342:0x8044:0x0 96085 (48) 0x200005224:0x1d:0x0 32271 (32) 0x200005224:0x1f62:0x0 518377 (32) 0x200003ab1:0x3400:0x0 32091 (48) 0x200002342:0x7fa4:0x0 32148 (48) 0x200002342:0x800e:0x0 512257 (32) 0x200000401:0x164:0x0 512236 (32) 0x200000401:0x15f:0x0 64251 (48) 0x200005221:0xa7a:0x0 64166 (28) 0x200002341:0xf0:0x0 96138 (28) 0x200005224:0x44:0x0 518538 (32) 0x200003ab1:0x347b:0x0 512245 (32) 0x200000401:0x161:0x0 128014 (32) 0x200000401:0x182:0x0 518498 (32) 0x200003ab1:0x3467:0x0 64260 (28) 0x200005222:0xdb:0x0 518429 (32) 0x200003ab1:0x3434:0x0 64086 (48) 0x200002342:0x8008:0x0 32138 (48) 
0x200002342:0x7fde:0x0 64062 (32) 0x200002342:0x759e:0x0 64177 (28) 0x200002341:0xfe:0x0 518506 (32) 0x200003ab1:0x346b:0x0 32257 (28) 0x200005224:0x53:0x0 128017 (32) 0x200000401:0x184:0x0 32108 (48) 0x200002342:0x7fb8:0x0 32036 (48) 0x2000013a1:0x64e:0x0 32278 (32) 0x200005224:0x1f69:0x0 518384 (32) 0x200003ab1:0x3407:0x0 96119 (28) 0x200005224:0x31:0x0 64257 (28) 0x200005222:0x26:0x0 32287 (32) 0x200005224:0x1f72:0x0 64158 (28) 0x200002341:0xea:0x0 32295 (32) 0x200005224:0x1f7a:0x0 128015 (32) 0x200000401:0x183:0x0 64214 (32) 0x200002341:0x110:0x0 512123 (32) 0x200000401:0x14a:0x0 96035 (32) 0x200002341:0x126:0x0 96012 (32) 0x200000401:0x175:0x0 32073 (48) 0x200002342:0x7f8e:0x0 64131 (48) 0x200002342:0x804e:0x0 64117 (48) 0x200002342:0x803a:0x0 32265 (32) 0x200005222:0x191:0x0 518508 (32) 0x200003ab1:0x346c:0x0 64234 (32) 0x200002341:0x11c:0x0 64058 (32) 0x2000013a2:0x8a4:0x0 32178 (28) 0x200002341:0xc1:0x0 96071 (28) 0x200005222:0x18:0x0 96014 (32) 0x200000401:0x176:0x0 32018 (32) 0x200000401:0x169:0x0 64213 (32) 0x200002341:0x10f:0x0 64114 (48) 0x200002342:0x8034:0x0 64160 (28) 0x200002341:0xec:0x0 512241 (32) 0x200000401:0x160:0x0 128004 (32) 0x200000401:0x17c:0x0 128012 (32) 0x200000401:0x181:0x0 32234 (28) 0x200002341:0xde:0x0 32307 (32) 0x200005224:0x1f86:0x0 32322 (32) 0x200005224:0x1f95:0x0 32360 (32) 0x200005224:0x1fbb:0x0 32364 (32) 0x200005224:0x1fbf:0x0 32366 (32) 0x200005224:0x1fc1:0x0 32372 (32) 0x200005224:0x1fc7:0x0 64269 (1952) 0x200005222:0x1a0:0x0 oleg146-server: debugfs 1.46.2.wc5 (26-Mar-2022) oleg146-server: /dev/mapper/mds2_flakey: catastrophic mode - not reading inode or group bitmaps 512004 (12) . 2 (4084) .. 512145 (32) 0x240000404:0x7e4:0x0 512206 (32) 0x240000406:0x1ccc:0x0 64010 (32) 0x240000401:0x16a:0x0 96005 (32) 0x240000401:0x170:0x0 128003 (32) 0x240000401:0x178:0x0 128010 (32) 0x240000401:0x17b:0x0 128014 (32) 0x240000401:0x17d:0x0 160001 (32) 0x240000401:0x183:0x0 192017 (28) 0x2400013a0:0xd:0x0 256004 (28) 0x2400013a0:0x28:0x0 128021 (28) 0x2400013a0:0x2f:0x0 192021 (28) 0x2400013a2:0x14:0x0 256011 (28) 0x2400013a2:0x1a:0x0 256014 (28) 0x2400013a2:0x1d:0x0 256016 (28) 0x2400013a4:0x5:0x0 288007 (28) 0x240001b73:0x26:0x0 128027 (28) 0x240001b73:0x34:0x0 192028 (28) 0x240001b73:0x4b:0x0 32029 (28) 0x240001b73:0x5b:0x0 160029 (28) 0x240001b73:0x5f:0x0 512447 (28) 0x240001b73:0x66:0x0 96030 (48) 0x240001b72:0x1e:0x0 514519 (48) 0x240001b78:0x5a87:0x0 514530 (32) 0x240001b78:0x5a91:0x0 514531 (32) 0x240001b78:0x5a92:0x0 514545 (32) 0x240001b78:0x5aa0:0x0 514561 (32) 0x240001b78:0x5ab0:0x0 514731 (32) 0x240001b78:0x5b0d:0x0 514575 (28) 0x240001b79:0x7:0x0 514671 (32) 0x240001b78:0x5ad1:0x0 514714 (32) 0x240001b78:0x5afc:0x0 514732 (32) 0x240001b78:0x5b0e:0x0 514726 (32) 0x240001b78:0x5b08:0x0 514665 (32) 0x240001b78:0x5acb:0x0 514973 (28) 0x240001b79:0x9b:0x0 515000 (28) 0x240001b79:0xa4:0x0 515182 (28) 0x240001b79:0xcd:0x0 515202 (28) 0x240001b7b:0x7:0x0 515224 (28) 0x240001b7b:0x15:0x0 515262 (28) 0x240002b10:0x21:0x0 515422 (28) 0x240002b13:0x79:0x0 515331 (28) 0x240002b13:0x1e:0x0 515343 (28) 0x240002b13:0x2a:0x0 515489 (28) 0x240002b13:0x9d:0x0 515354 (28) 0x240002b13:0x35:0x0 515335 (28) 0x240002b13:0x22:0x0 515441 (28) 0x240002b13:0x85:0x0 515384 (28) 0x240002b13:0x53:0x0 515471 (28) 0x240002b13:0x94:0x0 515371 (28) 0x240002b13:0x46:0x0 515345 (28) 0x240002b13:0x2c:0x0 515410 (28) 0x240002b13:0x6d:0x0 515385 (28) 0x240002b13:0x54:0x0 515472 (48) 0x240002b13:0xfc:0x0 3253 (48) 0x240002b13:0xea:0x0 515468 (48) 0x240002b13:0xfa:0x0 3208 (48) 
0x240002b13:0xbd:0x0 515539 (48) 0x240002b13:0x11d:0x0 3211 (48) 0x240002b13:0xc0:0x0 32046 (48) 0x240002b13:0x714:0x0 32048 (32) 0x240002b13:0x715:0x0 32093 (28) 0x240002b15:0x26:0x0 32102 (28) 0x240002b15:0x2f:0x0 32117 (28) 0x240002b15:0x3e:0x0 32138 (28) 0x240002b15:0x53:0x0 32140 (28) 0x240002b15:0x55:0x0 32146 (28) 0x240002b15:0x5b:0x0 32153 (28) 0x240002b15:0x62:0x0 32157 (28) 0x240002b15:0x66:0x0 32202 (32) 0x240002b13:0x13cf:0x0 32215 (32) 0x240002b13:0x1e80:0x0 32218 (32) 0x240002b13:0x1e95:0x0 32226 (1812) 0x240002b13:0x1fb2:0x0 64012 (32) 0x240000401:0x16b:0x0 192010 (28) 0x2400013a0:0x6:0x0 192001 (32) 0x240000401:0x18f:0x0 512108 (48) 0x240000404:0x2:0x0 32011 (48) 0x240000406:0x5a:0x0 512213 (48) 0x240000407:0x1:0x0 32013 (32) 0x240000401:0x15e:0x0 192006 (32) 0x240000401:0x192:0x0 512148 (32) 0x240000401:0x14c:0x0 224018 (28) 0x2400013a0:0x22:0x0 224020 (28) 0x2400013a0:0x24:0x0 256001 (28) 0x2400013a0:0x25:0x0 256003 (28) 0x2400013a0:0x27:0x0 64023 (28) 0x2400013a2:0x28:0x0 512369 (28) 0x240001b72:0xf:0x0 160023 (28) 0x240001b73:0x5:0x0 64026 (28) 0x240001b73:0x1d:0x0 160026 (28) 0x240001b73:0x20:0x0 224026 (28) 0x240001b73:0x22:0x0 288013 (28) 0x240001b73:0x2c:0x0 320001 (28) 0x240001b73:0x51:0x0 512432 (28) 0x240001b73:0x57:0x0 512440 (28) 0x240001b73:0x61:0x0 64030 (28) 0x240001b73:0x6c:0x0 224029 (28) 0x240001b72:0x62:0x0 512458 (32) 0x240001b74:0x1b5f:0x0 512483 (28) 0x240001b73:0x6d:0x0 32036 (48) 0x240001b78:0x569c:0x0 514748 (32) 0x240001b78:0x5b1e:0x0 514672 (32) 0x240001b78:0x5ad2:0x0 514573 (32) 0x240001b78:0x5abc:0x0 514715 (32) 0x240001b78:0x5afd:0x0 514749 (32) 0x240001b78:0x5b1f:0x0 514673 (32) 0x240001b78:0x5ad3:0x0 514783 (28) 0x240001b79:0x92:0x0 514905 (28) 0x240001b79:0x8b:0x0 514999 (28) 0x240001b79:0xa3:0x0 515138 (28) 0x240001b79:0xc2:0x0 515163 (28) 0x240001b79:0xc9:0x0 515243 (32) 0x240001b7b:0x7fd:0x0 515250 (28) 0x240002b10:0x10:0x0 515252 (32) 0x240002b11:0x9aa:0x0 515355 (28) 0x240002b13:0x36:0x0 515392 (28) 0x240002b13:0x5b:0x0 515386 (28) 0x240002b13:0x55:0x0 515390 (28) 0x240002b13:0x59:0x0 515398 (28) 0x240002b13:0x61:0x0 515381 (28) 0x240002b13:0x50:0x0 515336 (28) 0x240002b13:0x23:0x0 515460 (48) 0x240002b13:0xf7:0x0 3252 (48) 0x240002b13:0xe9:0x0 515476 (48) 0x240002b13:0xfe:0x0 515482 (48) 0x240002b13:0x100:0x0 3249 (48) 0x240002b13:0xe6:0x0 515537 (48) 0x240002b13:0x11b:0x0 515450 (48) 0x240002b13:0xf3:0x0 32070 (28) 0x240002b15:0xf:0x0 32071 (28) 0x240002b15:0x10:0x0 32084 (28) 0x240002b15:0x1d:0x0 32085 (28) 0x240002b15:0x1e:0x0 32103 (28) 0x240002b15:0x30:0x0 32107 (28) 0x240002b15:0x34:0x0 32123 (28) 0x240002b15:0x44:0x0 32125 (28) 0x240002b15:0x46:0x0 32137 (28) 0x240002b15:0x52:0x0 32160 (28) 0x240002b15:0x69:0x0 32189 (1972) 0x240002b15:0x86:0x0 96026 (28) 0x240001b73:0x1e:0x0 224006 (28) 0x2400013a0:0x16:0x0 224013 (28) 0x2400013a0:0x1d:0x0 128008 (32) 0x240000401:0x17a:0x0 512451 (28) 0x240001b73:0x69:0x0 192020 (28) 0x2400013a0:0x10:0x0 512420 (28) 0x240001b73:0x45:0x0 96007 (32) 0x240000401:0x171:0x0 192008 (28) 0x2400013a0:0x4:0x0 512434 (28) 0x240001b73:0x59:0x0 160028 (28) 0x240001b73:0x4a:0x0 288021 (28) 0x240001b73:0x3e:0x0 224015 (28) 0x2400013a0:0x1f:0x0 256008 (28) 0x2400013a0:0x2c:0x0 512159 (32) 0x240000401:0x14e:0x0 512468 (32) 0x240001b74:0x1b6f:0x0 512471 (32) 0x240001b74:0x1b76:0x0 512472 (32) 0x240001b74:0x1b78:0x0 514532 (32) 0x240001b78:0x5a93:0x0 514656 (32) 0x240001b78:0x5ac2:0x0 514655 (32) 0x240001b78:0x5ac1:0x0 514693 (32) 0x240001b78:0x5ae7:0x0 514694 (32) 0x240001b78:0x5ae8:0x0 514721 (32) 
0x240001b78:0x5b03:0x0 514683 (32) 0x240001b78:0x5add:0x0 514844 (28) 0x240001b79:0x76:0x0 514874 (28) 0x240001b79:0x7f:0x0 514953 (28) 0x240001b79:0x94:0x0 515021 (28) 0x240001b79:0xa7:0x0 514861 (28) 0x240001b79:0x7a:0x0 514811 (28) 0x240001b79:0x82:0x0 514879 (28) 0x240001b79:0x84:0x0 514880 (28) 0x240001b79:0x85:0x0 514914 (28) 0x240001b79:0x8c:0x0 514934 (28) 0x240001b79:0x91:0x0 515028 (28) 0x240001b79:0xa8:0x0 515036 (28) 0x240001b79:0xad:0x0 515051 (28) 0x240001b79:0xb3:0x0 515083 (28) 0x240001b79:0xb8:0x0 515206 (28) 0x240001b7b:0x9:0x0 515239 (28) 0x240001b7b:0x23:0x0 515272 (28) 0x240002b10:0x1d:0x0 515286 (28) 0x240002b13:0xe:0x0 515290 (28) 0x240002b13:0x10:0x0 515328 (28) 0x240002b13:0x1b:0x0 515364 (28) 0x240002b13:0x3f:0x0 515515 (28) 0x240002b13:0xaa:0x0 515499 (28) 0x240002b13:0xa2:0x0 515382 (28) 0x240002b13:0x51:0x0 515366 (28) 0x240002b13:0x41:0x0 515421 (28) 0x240002b13:0x78:0x0 515341 (28) 0x240002b13:0x28:0x0 515479 (28) 0x240002b13:0x98:0x0 515407 (28) 0x240002b13:0x6a:0x0 515502 (48) 0x240002b13:0x108:0x0 3257 (48) 0x240002b13:0xee:0x0 515538 (48) 0x240002b13:0x11c:0x0 3238 (48) 0x240002b13:0xdb:0x0 3234 (48) 0x240002b13:0xd7:0x0 32056 (28) 0x240002b15:0x1:0x0 32104 (28) 0x240002b15:0x31:0x0 32105 (28) 0x240002b15:0x32:0x0 32112 (28) 0x240002b15:0x39:0x0 32130 (28) 0x240002b15:0x4b:0x0 32162 (28) 0x240002b15:0x6b:0x0 32183 (28) 0x240002b15:0x80:0x0 32191 (28) 0x240002b15:0x88:0x0 32203 (2068) 0x240002b13:0x13d2:0x0 512419 (28) 0x240001b73:0x44:0x0 192013 (28) 0x2400013a0:0x9:0x0 224027 (28) 0x240001b73:0x37:0x0 96023 (28) 0x240001b73:0x1:0x0 32009 (32) 0x240000401:0x153:0x0 32023 (28) 0x240001b70:0x1:0x0 512150 (48) 0x240000404:0x7e7:0x0 288028 (28) 0x240001b73:0x50:0x0 288001 (28) 0x240001b73:0x18:0x0 512114 (48) 0x240000404:0x4:0x0 192019 (28) 0x2400013a0:0xf:0x0 224007 (28) 0x2400013a0:0x17:0x0 160030 (48) 0x240001b72:0x5f:0x0 512490 (28) 0x240001b73:0x74:0x0 512496 (32) 0x240001b74:0x1b88:0x0 512517 (32) 0x240001b77:0x2ee2:0x0 514557 (32) 0x240001b78:0x5aac:0x0 514733 (32) 0x240001b78:0x5b0f:0x0 514742 (32) 0x240001b78:0x5b18:0x0 514744 (32) 0x240001b78:0x5b1a:0x0 514730 (32) 0x240001b78:0x5b0c:0x0 514777 (48) 0x240001b78:0x5bd2:0x0 514791 (28) 0x240001b79:0x9a:0x0 515093 (28) 0x240001b79:0xbc:0x0 515105 (28) 0x240001b79:0xbf:0x0 515258 (32) 0x240002b11:0x9d4:0x0 515266 (28) 0x240002b10:0x19:0x0 515291 (28) 0x240002b13:0x15:0x0 515306 (28) 0x240002b10:0x32:0x0 515487 (28) 0x240002b13:0x9c:0x0 515395 (28) 0x240002b13:0x5e:0x0 515342 (28) 0x240002b13:0x29:0x0 515495 (28) 0x240002b13:0xa0:0x0 515362 (28) 0x240002b13:0x3d:0x0 515477 (28) 0x240002b13:0x97:0x0 515363 (28) 0x240002b13:0x3e:0x0 515509 (28) 0x240002b13:0xa7:0x0 515534 (48) 0x240002b13:0x118:0x0 3251 (48) 0x240002b13:0xe8:0x0 3213 (48) 0x240002b13:0xc2:0x0 515500 (48) 0x240002b13:0x107:0x0 32050 (32) 0x240002b13:0x729:0x0 32072 (28) 0x240002b15:0x11:0x0 32073 (28) 0x240002b15:0x12:0x0 32074 (28) 0x240002b15:0x13:0x0 32078 (28) 0x240002b15:0x17:0x0 32082 (28) 0x240002b15:0x1b:0x0 32113 (28) 0x240002b15:0x3a:0x0 32133 (28) 0x240002b15:0x4e:0x0 32148 (28) 0x240002b15:0x5d:0x0 32156 (76) 0x240002b15:0x65:0x0 32221 (2420) 0x240002b13:0xb0f:0x0 515048 (28) 0x240001b79:0xb0:0x0 514814 (28) 0x240001b79:0x86:0x0 96003 (32) 0x240000401:0x16f:0x0 514664 (32) 0x240001b78:0x5aca:0x0 224016 (28) 0x2400013a0:0x20:0x0 514663 (32) 0x240001b78:0x5ac9:0x0 515050 (28) 0x240001b79:0xb2:0x0 514735 (32) 0x240001b78:0x5b11:0x0 514679 (32) 0x240001b78:0x5ad9:0x0 160005 (32) 0x240000401:0x185:0x0 64015 (32) 0x240000401:0x16d:0x0 
32030 (28) 0x240001b73:0x6b:0x0 192007 (28) 0x2400013a0:0x1:0x0 64007 (32) 0x240000401:0x168:0x0 514653 (32) 0x240001b78:0x5abf:0x0 64022 (28) 0x240001b70:0x4:0x0 512149 (32) 0x240000401:0x14d:0x0 64017 (32) 0x240000401:0x16e:0x0 514660 (32) 0x240001b78:0x5ac6:0x0 512487 (28) 0x240001b73:0x71:0x0 64014 (32) 0x240000401:0x16c:0x0 128024 (28) 0x240001b72:0xb:0x0 512488 (28) 0x240001b73:0x72:0x0 256017 (28) 0x2400013a2:0x20:0x0 514678 (32) 0x240001b78:0x5ad8:0x0 512491 (32) 0x240001b74:0x1b83:0x0 512224 (32) 0x240000401:0x162:0x0 514722 (32) 0x240001b78:0x5b04:0x0 514990 (32) 0x240001b78:0x5cba:0x0 128006 (32) 0x240000401:0x179:0x0 515285 (28) 0x240002b10:0x24:0x0 515292 (28) 0x240002b10:0x31:0x0 515447 (28) 0x240002b13:0x88:0x0 515388 (28) 0x240002b13:0x57:0x0 515387 (28) 0x240002b13:0x56:0x0 515469 (28) 0x240002b13:0x93:0x0 515433 (28) 0x240002b13:0x81:0x0 515401 (28) 0x240002b13:0x64:0x0 515455 (28) 0x240002b13:0x8c:0x0 515457 (28) 0x240002b13:0x8d:0x0 515406 (28) 0x240002b13:0x69:0x0 3239 (48) 0x240002b13:0xdc:0x0 3230 (48) 0x240002b13:0xd3:0x0 3218 (48) 0x240002b13:0xc7:0x0 515492 (48) 0x240002b13:0x104:0x0 3212 (48) 0x240002b13:0xc1:0x0 3228 (48) 0x240002b13:0xd1:0x0 515494 (48) 0x240002b13:0x105:0x0 515540 (48) 0x240002b13:0x11e:0x0 515524 (48) 0x240002b13:0x112:0x0 515486 (48) 0x240002b13:0x101:0x0 515541 (48) 0x240002b13:0x11f:0x0 3235 (48) 0x240002b13:0xd8:0x0 32042 (32) 0x240002b13:0x12d:0x0 32087 (28) 0x240002b15:0x20:0x0 32116 (28) 0x240002b15:0x3d:0x0 32136 (28) 0x240002b15:0x51:0x0 32139 (28) 0x240002b15:0x54:0x0 32142 (28) 0x240002b15:0x57:0x0 32144 (28) 0x240002b15:0x59:0x0 32167 (28) 0x240002b15:0x70:0x0 32178 (28) 0x240002b15:0x7b:0x0 32188 (28) 0x240002b15:0x85:0x0 32192 (28) 0x240002b15:0x89:0x0 32222 (1984) 0x240002b13:0x1fa6:0x0 514702 (32) 0x240001b78:0x5af0:0x0 514773 (48) 0x240001b78:0x5bd0:0x0 514729 (32) 0x240001b78:0x5b0b:0x0 32028 (28) 0x240001b73:0x46:0x0 514691 (32) 0x240001b78:0x5ae5:0x0 512442 (28) 0x240001b73:0x62:0x0 64031 (32) 0x240001b74:0x1baf:0x0 192023 (28) 0x240001b73:0x9:0x0 514790 (28) 0x240001b79:0x87:0x0 288003 (28) 0x240001b73:0x1a:0x0 514699 (32) 0x240001b78:0x5aed:0x0 64021 (28) 0x2400013a2:0x4:0x0 514549 (32) 0x240001b78:0x5aa4:0x0 288026 (28) 0x240001b73:0x4e:0x0 128020 (32) 0x240000401:0x18b:0x0 512429 (28) 0x240001b73:0x55:0x0 514690 (32) 0x240001b78:0x5ae4:0x0 514666 (32) 0x240001b78:0x5acc:0x0 160019 (32) 0x240000401:0x18e:0x0 224008 (28) 0x2400013a0:0x18:0x0 288025 (28) 0x240001b73:0x42:0x0 514540 (32) 0x240001b78:0x5a9b:0x0 224001 (28) 0x2400013a0:0x11:0x0 32005 (28) 0x240000401:0x2:0x0 32027 (28) 0x240001b73:0x31:0x0 514669 (32) 0x240001b78:0x5acf:0x0 192026 (28) 0x240001b73:0x21:0x0 512129 (48) 0x240000404:0x9:0x0 160017 (32) 0x240000401:0x18d:0x0 128015 (32) 0x240000401:0x17e:0x0 320003 (28) 0x240001b73:0x53:0x0 514707 (32) 0x240001b78:0x5af5:0x0 515053 (28) 0x240001b79:0xb5:0x0 515104 (28) 0x240001b79:0xbe:0x0 515147 (28) 0x240001b79:0xc5:0x0 515159 (28) 0x240001b79:0xc8:0x0 515194 (32) 0x240001b7a:0x138a:0x0 515209 (28) 0x240001b79:0xd5:0x0 515237 (28) 0x240001b7b:0x21:0x0 515271 (28) 0x240002b10:0x1c:0x0 515287 (28) 0x240002b13:0xf:0x0 515289 (28) 0x240002b10:0x27:0x0 515307 (28) 0x240002b10:0x33:0x0 32040 (48) 0x240002b13:0xb1:0x0 515397 (28) 0x240002b13:0x60:0x0 515412 (28) 0x240002b13:0x6f:0x0 515359 (28) 0x240002b13:0x3a:0x0 515485 (28) 0x240002b13:0x9b:0x0 515370 (28) 0x240002b13:0x45:0x0 515334 (28) 0x240002b13:0x21:0x0 515419 (28) 0x240002b13:0x76:0x0 515507 (28) 0x240002b13:0xa6:0x0 515333 (28) 0x240002b13:0x20:0x0 515529 
(48) 0x240002b13:0x113:0x0 3236 (48) 0x240002b13:0xd9:0x0 3233 (48) 0x240002b13:0xd6:0x0 3250 (48) 0x240002b13:0xe7:0x0 3217 (48) 0x240002b13:0xc6:0x0 515474 (48) 0x240002b13:0xfd:0x0 515547 (48) 0x240002b13:0x51e:0x0 515550 (48) 0x240002b13:0x520:0x0 32055 (32) 0x240002b13:0xb7a:0x0 32066 (28) 0x240002b15:0xb:0x0 32068 (28) 0x240002b15:0xd:0x0 32076 (28) 0x240002b15:0x15:0x0 32081 (28) 0x240002b15:0x1a:0x0 32152 (28) 0x240002b15:0x61:0x0 32173 (28) 0x240002b15:0x76:0x0 32182 (1904) 0x240002b15:0x7f:0x0 160027 (28) 0x240001b73:0x35:0x0 512144 (32) 0x240000404:0x7e3:0x0 128025 (28) 0x240001b72:0x18:0x0 514877 (28) 0x240001b79:0x81:0x0 514667 (32) 0x240001b78:0x5acd:0x0 256012 (28) 0x2400013a2:0x1c:0x0 514674 (32) 0x240001b78:0x5ad4:0x0 514900 (28) 0x240001b79:0x88:0x0 514717 (32) 0x240001b78:0x5aff:0x0 515151 (28) 0x240001b79:0xc6:0x0 514521 (48) 0x240001b78:0x5a8a:0x0 514772 (28) 0x240001b79:0x7e:0x0 514560 (32) 0x240001b78:0x5aaf:0x0 515143 (28) 0x240001b79:0xc4:0x0 514711 (32) 0x240001b78:0x5af9:0x0 512433 (28) 0x240001b73:0x58:0x0 515031 (28) 0x240001b79:0xab:0x0 224014 (28) 0x2400013a0:0x1e:0x0 514747 (32) 0x240001b78:0x5b1d:0x0 32008 (32) 0x240000401:0x152:0x0 512404 (28) 0x240001b73:0x30:0x0 514654 (32) 0x240001b78:0x5ac0:0x0 224024 (28) 0x240001b73:0xe:0x0 512228 (32) 0x240000401:0x163:0x0 514901 (28) 0x240001b79:0x89:0x0 514736 (32) 0x240001b78:0x5b12:0x0 192012 (28) 0x2400013a0:0x8:0x0 514687 (32) 0x240001b78:0x5ae1:0x0 32031 (32) 0x240001b74:0x1ba4:0x0 256013 (28) 0x2400013a4:0x4:0x0 512444 (28) 0x240001b73:0x64:0x0 514709 (32) 0x240001b78:0x5af7:0x0 512498 (32) 0x240001b74:0x1b9c:0x0 224010 (28) 0x2400013a0:0x1a:0x0 512111 (48) 0x240000404:0x3:0x0 514708 (32) 0x240001b78:0x5af6:0x0 288017 (28) 0x240001b73:0x3a:0x0 515188 (28) 0x240001b79:0xd0:0x0 515192 (32) 0x240001b78:0x5cbc:0x0 515249 (28) 0x240002b10:0x5:0x0 515274 (28) 0x240002b10:0x1e:0x0 515265 (28) 0x240002b10:0x22:0x0 515332 (28) 0x240002b13:0x1f:0x0 515415 (28) 0x240002b13:0x72:0x0 515349 (28) 0x240002b13:0x30:0x0 515427 (28) 0x240002b13:0x7e:0x0 515394 (28) 0x240002b13:0x5d:0x0 515346 (28) 0x240002b13:0x2d:0x0 515396 (28) 0x240002b13:0x5f:0x0 515542 (48) 0x240002b13:0x120:0x0 3241 (48) 0x240002b13:0xde:0x0 3240 (48) 0x240002b13:0xdd:0x0 515520 (48) 0x240002b13:0x110:0x0 515448 (48) 0x240002b13:0xf2:0x0 515498 (48) 0x240002b13:0x106:0x0 32101 (28) 0x240002b15:0x2e:0x0 32114 (28) 0x240002b15:0x3b:0x0 32149 (28) 0x240002b15:0x5e:0x0 32164 (28) 0x240002b15:0x6d:0x0 32169 (28) 0x240002b15:0x72:0x0 32219 (2188) 0x240002b13:0x1e97:0x0 514692 (32) 0x240001b78:0x5ae6:0x0 64003 (32) 0x240000401:0x166:0x0 514754 (48) 0x240001b78:0x5bc2:0x0 512162 (32) 0x240000401:0x151:0x0 514552 (32) 0x240001b78:0x5aa7:0x0 192027 (28) 0x240001b73:0x36:0x0 514758 (48) 0x240001b78:0x5bc3:0x0 515167 (28) 0x240001b79:0xca:0x0 514831 (28) 0x240001b79:0x71:0x0 514794 (28) 0x240001b79:0x74:0x0 64024 (28) 0x240001b72:0x1:0x0 514741 (32) 0x240001b78:0x5b17:0x0 514657 (32) 0x240001b78:0x5ac3:0x0 514740 (32) 0x240001b78:0x5b16:0x0 256023 (28) 0x240001b73:0x15:0x0 192024 (28) 0x240001b73:0xa:0x0 514972 (28) 0x240001b79:0x98:0x0 192014 (28) 0x2400013a0:0xa:0x0 96011 (32) 0x240000401:0x173:0x0 256002 (28) 0x2400013a0:0x26:0x0 514670 (32) 0x240001b78:0x5ad0:0x0 515044 (28) 0x240001b79:0xaf:0x0 224005 (28) 0x2400013a0:0x15:0x0 32038 (48) 0x240001b78:0x569e:0x0 514563 (32) 0x240001b78:0x5ab2:0x0 288005 (28) 0x240001b73:0x24:0x0 256024 (28) 0x240001b73:0x16:0x0 160021 (28) 0x2400013a0:0x30:0x0 128001 (32) 0x240000401:0x177:0x0 32010 (32) 0x240000401:0x154:0x0 
256021 (28) 0x240001b73:0x13:0x0 514724 (32) 0x240001b78:0x5b06:0x0 514682 (32) 0x240001b78:0x5adc:0x0 512470 (32) 0x240001b74:0x1b74:0x0 515435 (28) 0x240002b13:0x82:0x0 515513 (28) 0x240002b13:0xa9:0x0 515418 (28) 0x240002b13:0x75:0x0 3220 (48) 0x240002b13:0xc9:0x0 515488 (48) 0x240002b13:0x102:0x0 3216 (48) 0x240002b13:0xc5:0x0 32044 (32) 0x240002b13:0x131:0x0 32054 (32) 0x240002b13:0xb77:0x0 32060 (28) 0x240002b15:0x5:0x0 32064 (28) 0x240002b15:0x9:0x0 32075 (28) 0x240002b15:0x14:0x0 32079 (28) 0x240002b15:0x18:0x0 32090 (28) 0x240002b15:0x23:0x0 32118 (28) 0x240002b15:0x3f:0x0 32132 (28) 0x240002b15:0x4d:0x0 32145 (28) 0x240002b15:0x5a:0x0 32150 (28) 0x240002b15:0x5f:0x0 32170 (28) 0x240002b15:0x73:0x0 32179 (28) 0x240002b15:0x7c:0x0 32193 (28) 0x240002b15:0x8a:0x0 32205 (48) 0x240002b13:0x13d6:0x0 32213 (48) 0x240002b13:0x1df7:0x0 32224 (2300) 0x240002b10:0x96:0x0 3207 (48) 0x240002b13:0xbc:0x0 514931 (28) 0x240001b79:0x8f:0x0 514652 (28) 0x240001b7b:0x12:0x0 515530 (48) 0x240002b13:0x114:0x0 515365 (28) 0x240002b13:0x40:0x0 515193 (28) 0x240001b7a:0x1:0x0 515405 (28) 0x240002b13:0x68:0x0 3226 (48) 0x240002b13:0xcf:0x0 515310 (28) 0x240002b10:0x36:0x0 64032 (32) 0x240001b74:0x1bb1:0x0 515312 (28) 0x240002b10:0x37:0x0 514866 (28) 0x240001b79:0x7c:0x0 160012 (32) 0x240000401:0x189:0x0 256028 (28) 0x240001b73:0x4d:0x0 514751 (32) 0x240001b78:0x5b21:0x0 64028 (28) 0x240001b73:0x47:0x0 32067 (32) 0x240002b15:0xc:0x0 515269 (28) 0x240002b10:0x1b:0x0 514815 (28) 0x240001b79:0x8e:0x0 515142 (28) 0x240001b79:0xc3:0x0 515187 (28) 0x240001b79:0xcf:0x0 96016 (32) 0x240000401:0x176:0x0 515508 (48) 0x240002b13:0x10a:0x0 514684 (32) 0x240001b78:0x5ade:0x0 3258 (28) 0x240002b13:0xef:0x0 512160 (32) 0x240000401:0x14f:0x0 515049 (28) 0x240001b79:0xb1:0x0 514701 (32) 0x240001b78:0x5aef:0x0 515010 (28) 0x240001b79:0xa5:0x0 515253 (32) 0x240002b11:0x9b3:0x0 514551 (32) 0x240001b78:0x5aa6:0x0 515510 (48) 0x240002b13:0x10b:0x0 224012 (28) 0x2400013a0:0x1c:0x0 515338 (28) 0x240002b13:0x25:0x0 64027 (28) 0x240001b73:0x32:0x0 96025 (28) 0x240001b72:0x16:0x0 224009 (28) 0x2400013a0:0x19:0x0 64029 (28) 0x240001b73:0x5c:0x0 128023 (28) 0x240001b72:0xa:0x0 515380 (28) 0x240002b13:0x4f:0x0 256015 (28) 0x2400013a2:0x1f:0x0 3242 (48) 0x240002b13:0xdf:0x0 514695 (32) 0x240001b78:0x5ae9:0x0 512328 (32) 0x240000401:0x196:0x0 128026 (28) 0x240001b73:0x1f:0x0 512156 (48) 0x240000404:0x7e9:0x0 515467 (28) 0x240002b13:0x92:0x0 515481 (28) 0x240002b13:0x99:0x0 515446 (48) 0x240002b13:0xf1:0x0 514819 (28) 0x240001b79:0x99:0x0 514734 (32) 0x240001b78:0x5b10:0x0 515459 (28) 0x240002b13:0x8e:0x0 515029 (28) 0x240001b79:0xa9:0x0 514659 (32) 0x240001b78:0x5ac5:0x0 512327 (32) 0x240000401:0x195:0x0 515379 (28) 0x240002b13:0x4e:0x0 512492 (48) 0x240001b74:0x1b86:0x0 128022 (28) 0x240001b72:0x8:0x0 512436 (28) 0x240001b73:0x5a:0x0 515222 (28) 0x240001b7b:0x14:0x0 515466 (48) 0x240002b13:0xf9:0x0 515091 (28) 0x240001b79:0xba:0x0 515245 (32) 0x240001b7b:0x801:0x0 32088 (28) 0x240002b15:0x21:0x0 32097 (28) 0x240002b15:0x2a:0x0 32161 (28) 0x240002b15:0x6a:0x0 32165 (28) 0x240002b15:0x6e:0x0 32168 (28) 0x240002b15:0x71:0x0 32196 (32) 0x240002b13:0xc67:0x0 32209 (1896) 0x240002b13:0x1ce9:0x0 3214 (48) 0x240002b13:0xc3:0x0 514547 (32) 0x240001b78:0x5aa2:0x0 514713 (32) 0x240001b78:0x5afb:0x0 256010 (28) 0x2400013a4:0x1:0x0 160014 (32) 0x240000401:0x18c:0x0 288022 (28) 0x240001b73:0x3f:0x0 256007 (28) 0x2400013a0:0x2b:0x0 515361 (28) 0x240002b13:0x3c:0x0 514698 (32) 0x240001b78:0x5aec:0x0 514697 (32) 0x240001b78:0x5aeb:0x0 2203 (48) 
0x240002b13:0xbb:0x0 515190 (28) 0x240001b79:0xd2:0x0 515579 (32) 0x240002b13:0x84c:0x0 512147 (32) 0x240000404:0x7e6:0x0 512460 (32) 0x240001b74:0x1b60:0x0 192016 (28) 0x2400013a0:0xc:0x0 288012 (28) 0x240001b73:0x2b:0x0 514725 (32) 0x240001b78:0x5b07:0x0 514770 (48) 0x240001b78:0x5c7f:0x0 515456 (48) 0x240002b13:0xf6:0x0 515347 (28) 0x240002b13:0x2e:0x0 515330 (28) 0x240002b13:0x1d:0x0 515308 (28) 0x240002b10:0x34:0x0 515445 (28) 0x240002b13:0x87:0x0 32061 (32) 0x240002b15:0x6:0x0 514789 (28) 0x240001b79:0x90:0x0 515351 (28) 0x240002b13:0x32:0x0 32077 (28) 0x240002b15:0x16:0x0 32091 (36) 0x240002b15:0x24:0x0 514700 (32) 0x240001b78:0x5aee:0x0 515329 (28) 0x240002b13:0x1c:0x0 256006 (28) 0x2400013a0:0x2a:0x0 192009 (28) 0x2400013a0:0x5:0x0 515416 (28) 0x240002b13:0x73:0x0 515367 (28) 0x240002b13:0x42:0x0 224028 (28) 0x240001b73:0x4c:0x0 512325 (32) 0x240000401:0x193:0x0 515350 (28) 0x240002b13:0x31:0x0 514562 (32) 0x240001b78:0x5ab1:0x0 514716 (32) 0x240001b78:0x5afe:0x0 32032 (48) 0x240001b74:0x1bae:0x0 514688 (32) 0x240001b78:0x5ae2:0x0 256020 (28) 0x240001b73:0x12:0x0 515554 (32) 0x240002b13:0x711:0x0 515054 (28) 0x240001b79:0xb6:0x0 288024 (28) 0x240001b73:0x41:0x0 160007 (32) 0x240000401:0x186:0x0 514559 (32) 0x240001b78:0x5aae:0x0 515518 (48) 0x240002b13:0x10f:0x0 96027 (28) 0x240001b73:0x33:0x0 515483 (28) 0x240002b13:0x9a:0x0 515552 (32) 0x240002b13:0x69c:0x0 3222 (48) 0x240002b13:0xcb:0x0 515348 (28) 0x240002b13:0x2f:0x0 514839 (28) 0x240001b79:0x73:0x0 515174 (28) 0x240001b79:0xcb:0x0 515357 (28) 0x240002b13:0x38:0x0 515403 (28) 0x240002b13:0x66:0x0 515461 (28) 0x240002b13:0x8f:0x0 515478 (48) 0x240002b13:0xff:0x0 515426 (28) 0x240002b13:0x7d:0x0 3247 (48) 0x240002b13:0xe4:0x0 32124 (28) 0x240002b15:0x45:0x0 32126 (28) 0x240002b15:0x47:0x0 32131 (28) 0x240002b15:0x4c:0x0 32159 (28) 0x240002b15:0x68:0x0 32171 (28) 0x240002b15:0x74:0x0 32174 (28) 0x240002b15:0x77:0x0 32194 (32) 0x240002b13:0xb7c:0x0 32195 (32) 0x240002b13:0xc63:0x0 32211 (48) 0x240002b13:0x1df6:0x0 32041 (1816) 0x240002b13:0x1e85:0x0 32025 (48) 0x240001b72:0x13:0x0 515464 (48) 0x240002b13:0xf8:0x0 512119 (48) 0x240000404:0x6:0x0 256019 (28) 0x240001b73:0x11:0x0 514539 (32) 0x240001b78:0x5a9a:0x0 32034 (32) 0x240001b78:0x5696:0x0 128028 (28) 0x240001b73:0x49:0x0 512430 (28) 0x240001b73:0x56:0x0 514737 (32) 0x240001b78:0x5b13:0x0 515247 (32) 0x240001b7b:0x806:0x0 515503 (28) 0x240002b13:0xa4:0x0 515058 (28) 0x240001b79:0xb7:0x0 515092 (28) 0x240001b79:0xbb:0x0 224022 (28) 0x240001b73:0xc:0x0 515377 (28) 0x240002b13:0x4c:0x0 512153 (48) 0x240000404:0x7e8:0x0 514544 (32) 0x240001b78:0x5a9f:0x0 320002 (28) 0x240001b73:0x52:0x0 515439 (28) 0x240002b13:0x84:0x0 515259 (28) 0x240002b10:0x18:0x0 514802 (28) 0x240001b79:0x96:0x0 514712 (32) 0x240001b78:0x5afa:0x0 514675 (32) 0x240001b78:0x5ad5:0x0 128012 (32) 0x240000401:0x17c:0x0 3223 (48) 0x240002b13:0xcc:0x0 288011 (28) 0x240001b73:0x2a:0x0 96024 (28) 0x240001b72:0x6:0x0 515516 (48) 0x240002b13:0x10e:0x0 32062 (32) 0x240002b15:0x7:0x0 515314 (28) 0x240002b13:0x17:0x0 515453 (28) 0x240002b13:0x8b:0x0 32080 (32) 0x240002b15:0x19:0x0 514686 (32) 0x240001b78:0x5ae0:0x0 515463 (28) 0x240002b13:0x90:0x0 515512 (48) 0x240002b13:0x10c:0x0 512450 (28) 0x240001b73:0x68:0x0 32083 (32) 0x240002b15:0x1c:0x0 514939 (28) 0x240001b79:0x93:0x0 512467 (32) 0x240001b74:0x1b6a:0x0 32096 (32) 0x240002b15:0x29:0x0 515212 (48) 0x240001b7b:0xd:0x0 32021 (48) 0x240000408:0x235d:0x0 515505 (28) 0x240002b13:0xa5:0x0 512105 (48) 0x240000404:0x1:0x0 96009 (32) 0x240000401:0x172:0x0 192011 (28) 
0x2400013a0:0x7:0x0 514723 (32) 0x240001b78:0x5b05:0x0 32099 (32) 0x240002b15:0x2c:0x0 96022 (28) 0x240001b70:0x5:0x0 514553 (32) 0x240001b78:0x5aa8:0x0 515309 (28) 0x240002b10:0x35:0x0 32115 (28) 0x240002b15:0x3c:0x0 32134 (36) 0x240002b15:0x4f:0x0 515402 (28) 0x240002b13:0x65:0x0 256026 (28) 0x240001b73:0x23:0x0 514739 (32) 0x240001b78:0x5b15:0x0 3224 (48) 0x240002b13:0xcd:0x0 515240 (28) 0x240001b7b:0x26:0x0 3237 (48) 0x240002b13:0xda:0x0 514738 (32) 0x240001b78:0x5b14:0x0 32141 (28) 0x240002b15:0x56:0x0 32187 (28) 0x240002b15:0x84:0x0 32220 (2032) 0x240002b13:0x1f9a:0x0 160024 (28) 0x240001b73:0x6:0x0 515429 (28) 0x240002b13:0x7f:0x0 32058 (32) 0x240002b15:0x3:0x0 514780 (48) 0x240001b78:0x5bd4:0x0 515511 (28) 0x240002b13:0xa8:0x0 32063 (32) 0x240002b15:0x8:0x0 514775 (48) 0x240001b78:0x5bd1:0x0 32106 (32) 0x240002b15:0x33:0x0 515178 (28) 0x240001b79:0xcc:0x0 515352 (28) 0x240002b13:0x33:0x0 514727 (32) 0x240001b78:0x5b09:0x0 515264 (28) 0x240002b13:0x8:0x0 514767 (28) 0x240001b79:0x80:0x0 515084 (28) 0x240001b79:0xb9:0x0 515424 (28) 0x240002b13:0x7b:0x0 515532 (48) 0x240002b13:0x116:0x0 515376 (28) 0x240002b13:0x4b:0x0 224021 (28) 0x2400013a2:0x16:0x0 515522 (48) 0x240002b13:0x111:0x0 515320 (28) 0x240002b10:0x39:0x0 512326 (32) 0x240000401:0x194:0x0 515400 (28) 0x240002b13:0x63:0x0 515152 (28) 0x240001b79:0xc7:0x0 515125 (28) 0x240001b79:0xc1:0x0 514816 (28) 0x240001b79:0x75:0x0 514858 (28) 0x240001b79:0x79:0x0 515408 (28) 0x240002b13:0x6b:0x0 32108 (32) 0x240002b15:0x35:0x0 3231 (48) 0x240002b13:0xd4:0x0 515451 (28) 0x240002b13:0x8a:0x0 514677 (32) 0x240001b78:0x5ad7:0x0 515261 (28) 0x240002b13:0x6:0x0 96019 (32) 0x240000401:0x182:0x0 64005 (32) 0x240000401:0x167:0x0 515340 (28) 0x240002b13:0x27:0x0 32119 (32) 0x240002b15:0x40:0x0 288015 (28) 0x240001b73:0x2e:0x0 515393 (28) 0x240002b13:0x5c:0x0 514518 (28) 0x240001b79:0x6:0x0 256005 (28) 0x2400013a0:0x29:0x0 514971 (28) 0x240001b79:0x97:0x0 128029 (28) 0x240001b73:0x5e:0x0 512448 (28) 0x240001b73:0x67:0x0 32120 (28) 0x240002b15:0x41:0x0 32122 (36) 0x240002b15:0x43:0x0 512402 (28) 0x240001b73:0x2f:0x0 514710 (32) 0x240001b78:0x5af8:0x0 32045 (32) 0x240002b13:0x134:0x0 3246 (48) 0x240002b13:0xe3:0x0 192003 (32) 0x240000401:0x190:0x0 514696 (32) 0x240001b78:0x5aea:0x0 512443 (28) 0x240001b73:0x63:0x0 515514 (48) 0x240002b13:0x10d:0x0 256025 (28) 0x240001b73:0x17:0x0 514718 (32) 0x240001b78:0x5b00:0x0 512489 (28) 0x240001b73:0x73:0x0 32049 (32) 0x240002b13:0x725:0x0 514681 (32) 0x240001b78:0x5adb:0x0 514771 (48) 0x240001b78:0x5bcf:0x0 514705 (32) 0x240001b78:0x5af3:0x0 514662 (32) 0x240001b78:0x5ac8:0x0 32128 (28) 0x240002b15:0x49:0x0 32129 (28) 0x240002b15:0x4a:0x0 32151 (28) 0x240002b15:0x60:0x0 32163 (28) 0x240002b15:0x6c:0x0 32175 (28) 0x240002b15:0x78:0x0 32185 (28) 0x240002b15:0x82:0x0 32186 (28) 0x240002b15:0x83:0x0 32225 (1948) 0x240002b13:0x1fb1:0x0 514797 (28) 0x240001b79:0x7d:0x0 515183 (28) 0x240001b79:0xce:0x0 2202 (48) 0x240002b13:0xba:0x0 515097 (28) 0x240001b79:0xbd:0x0 514763 (28) 0x240001b79:0x8a:0x0 515425 (28) 0x240002b13:0x7c:0x0 32057 (32) 0x240002b15:0x2:0x0 288008 (28) 0x240001b73:0x27:0x0 515215 (28) 0x240001b7b:0xe:0x0 32098 (32) 0x240002b15:0x2b:0x0 515431 (28) 0x240002b13:0x80:0x0 32100 (32) 0x240002b15:0x2d:0x0 514743 (32) 0x240001b78:0x5b19:0x0 512141 (48) 0x240000404:0x7e2:0x0 192018 (28) 0x2400013a0:0xe:0x0 32121 (32) 0x240002b15:0x42:0x0 515475 (28) 0x240002b13:0x96:0x0 288019 (28) 0x240001b73:0x3c:0x0 514550 (32) 0x240001b78:0x5aa5:0x0 32135 (32) 0x240002b15:0x50:0x0 515217 (48) 0x240001b7b:0x11:0x0 
32172 (32) 0x240002b15:0x75:0x0 288004 (28) 0x240001b73:0x1b:0x0 515339 (28) 0x240002b13:0x26:0x0 515321 (28) 0x240002b10:0x3a:0x0 288027 (28) 0x240001b73:0x4f:0x0 515017 (28) 0x240001b79:0xa6:0x0 32176 (32) 0x240002b15:0x79:0x0 288006 (28) 0x240001b73:0x25:0x0 3244 (48) 0x240002b13:0xe1:0x0 32184 (32) 0x240002b15:0x81:0x0 515254 (28) 0x240002b10:0x15:0x0 32201 (32) 0x240002b10:0x94:0x0 515535 (80) 0x240002b13:0x119:0x0 288010 (28) 0x240001b73:0x29:0x0 515527 (28) 0x240002b13:0xb0:0x0 3209 (48) 0x240002b13:0xbe:0x0 514719 (32) 0x240001b78:0x5b01:0x0 256027 (28) 0x240001b73:0x38:0x0 515337 (28) 0x240002b13:0x24:0x0 64019 (32) 0x240000401:0x181:0x0 160009 (64) 0x240000401:0x187:0x0 514534 (32) 0x240001b78:0x5a95:0x0 514745 (64) 0x240001b78:0x5b1b:0x0 514796 (28) 0x240001b79:0x95:0x0 515248 (28) 0x240002b10:0x4:0x0 515121 (28) 0x240001b79:0xc0:0x0 515389 (28) 0x240002b13:0x58:0x0 64020 (32) 0x240000401:0x18a:0x0 514704 (32) 0x240001b78:0x5af2:0x0 514689 (64) 0x240001b78:0x5ae3:0x0 96014 (32) 0x240000401:0x175:0x0 515525 (92) 0x240002b13:0xaf:0x0 515470 (48) 0x240002b13:0xfb:0x0 515413 (28) 0x240002b13:0x70:0x0 3232 (48) 0x240002b13:0xd5:0x0 64001 (2096) 0x240000401:0x157:0x0 3219 (48) 0x240002b13:0xc8:0x0 515030 (28) 0x240001b79:0xaa:0x0 514668 (32) 0x240001b78:0x5ace:0x0 512161 (32) 0x240000401:0x150:0x0 32180 (28) 0x240002b15:0x7d:0x0 32127 (28) 0x240002b15:0x48:0x0 32227 (48) 0x240002b10:0x97:0x0 224004 (28) 0x2400013a0:0x14:0x0 32110 (28) 0x240002b15:0x37:0x0 514720 (32) 0x240001b78:0x5b02:0x0 64009 (32) 0x240000401:0x169:0x0 3225 (48) 0x240002b13:0xce:0x0 514862 (28) 0x240001b79:0x7b:0x0 515378 (28) 0x240002b13:0x4d:0x0 96021 (28) 0x2400013a0:0x2e:0x0 514685 (32) 0x240001b78:0x5adf:0x0 192025 (28) 0x240001b73:0xb:0x0 514661 (32) 0x240001b78:0x5ac7:0x0 514857 (28) 0x240001b79:0x78:0x0 515210 (28) 0x240001b79:0xd6:0x0 514752 (76) 0x240001b79:0x70:0x0 515443 (28) 0x240002b13:0x86:0x0 512417 (28) 0x240001b73:0x43:0x0 515531 (48) 0x240002b13:0x115:0x0 32019 (32) 0x240000401:0x180:0x0 32069 (28) 0x240002b15:0xe:0x0 512486 (28) 0x240001b73:0x70:0x0 32095 (28) 0x240002b15:0x28:0x0 32059 (28) 0x240002b15:0x4:0x0 288020 (28) 0x240001b73:0x3d:0x0 32154 (28) 0x240002b15:0x63:0x0 515521 (28) 0x240002b13:0xad:0x0 512485 (76) 0x240001b73:0x6f:0x0 515465 (28) 0x240002b13:0x91:0x0 515369 (28) 0x240002b13:0x44:0x0 32181 (28) 0x240002b15:0x7e:0x0 515373 (28) 0x240002b13:0x48:0x0 224017 (28) 0x2400013a0:0x21:0x0 32109 (28) 0x240002b15:0x36:0x0 32024 (28) 0x2400013a2:0x24:0x0 514564 (32) 0x240001b78:0x5ab3:0x0 512484 (28) 0x240001b73:0x6e:0x0 32111 (28) 0x240002b15:0x38:0x0 288009 (28) 0x240001b73:0x28:0x0 288018 (28) 0x240001b73:0x3b:0x0 32177 (28) 0x240002b15:0x7a:0x0 514746 (32) 0x240001b78:0x5b1c:0x0 256022 (28) 0x240001b73:0x14:0x0 32043 (32) 0x240002b13:0x12e:0x0 224023 (76) 0x240001b73:0xd:0x0 515490 (48) 0x240002b13:0x103:0x0 514820 (48) 0x240001b78:0x5c0c:0x0 3256 (48) 0x240002b13:0xed:0x0 320004 (28) 0x240001b73:0x54:0x0 96012 (32) 0x240000401:0x174:0x0 515204 (28) 0x240001b7b:0x8:0x0 515452 (48) 0x240002b13:0xf4:0x0 32094 (28) 0x240002b15:0x27:0x0 3254 (48) 0x240002b13:0xeb:0x0 32052 (2076) 0x240002b13:0xb76:0x0 514761 (28) 0x240001b79:0x72:0x0 3227 (48) 0x240002b13:0xd0:0x0 515523 (28) 0x240002b13:0xae:0x0 515241 (80) 0x240001b7b:0x7f7:0x0 514541 (32) 0x240001b78:0x5a9c:0x0 32051 (32) 0x240002b13:0xb11:0x0 514995 (76) 0x240001b79:0xa2:0x0 514542 (32) 0x240001b78:0x5a9d:0x0 32166 (28) 0x240002b15:0x6f:0x0 3243 (48) 0x240002b13:0xe0:0x0 160010 (80) 0x240000401:0x188:0x0 515582 (32) 
0x240002b13:0x972:0x0 32065 (28) 0x240002b15:0xa:0x0 96029 (28) 0x240001b73:0x5d:0x0 515189 (28) 0x240001b79:0xd1:0x0 514676 (32) 0x240001b78:0x5ad6:0x0 515536 (48) 0x240002b13:0x11a:0x0 512466 (32) 0x240001b74:0x1b65:0x0 515207 (28) 0x240001b7b:0xa:0x0 514991 (28) 0x240001b79:0xa1:0x0 3221 (48) 0x240002b13:0xca:0x0 515195 (48) 0x240001b7b:0x1:0x0 515473 (28) 0x240002b13:0x95:0x0 32089 (28) 0x240002b15:0x22:0x0 160025 (28) 0x240001b73:0x7:0x0 160003 (32) 0x240000401:0x184:0x0 514728 (32) 0x240001b78:0x5b0a:0x0 514574 (32) 0x240001b78:0x5abd:0x0 32035 (32) 0x240001b78:0x5697:0x0 515506 (48) 0x240002b13:0x109:0x0 32016 (32) 0x240000401:0x165:0x0 192029 (28) 0x240001b73:0x60:0x0 32026 (28) 0x240001b73:0x1c:0x0 3210 (48) 0x240002b13:0xbf:0x0 224011 (28) 0x2400013a0:0x1b:0x0 514703 (32) 0x240001b78:0x5af1:0x0 192005 (32) 0x240000401:0x191:0x0 515344 (28) 0x240002b13:0x2b:0x0 3255 (48) 0x240002b13:0xec:0x0 32147 (28) 0x240002b15:0x5c:0x0 32155 (28) 0x240002b15:0x64:0x0 32158 (28) 0x240002b15:0x67:0x0 288014 (28) 0x240001b73:0x2d:0x0 515553 (28) 0x240002b10:0x92:0x0 514680 (80) 0x240001b78:0x5ada:0x0 512232 (32) 0x240000401:0x164:0x0 515268 (28) 0x240002b10:0x1a:0x0 160022 (28) 0x240001b73:0x4:0x0 32190 (28) 0x240002b15:0x87:0x0 192015 (28) 0x2400013a0:0xb:0x0 515040 (28) 0x240001b79:0xae:0x0 3259 (48) 0x240002b13:0xf0:0x0 514706 (32) 0x240001b78:0x5af4:0x0 96028 (28) 0x240001b73:0x48:0x0 514774 (28) 0x240001b79:0x8d:0x0 515454 (2112) 0x240002b13:0xf5:0x0 224002 (28) 0x2400013a0:0x12:0x0 515497 (28) 0x240002b13:0xa1:0x0 32198 (48) 0x240002b13:0xc69:0x0 512207 (48) 0x240000406:0x1ccd:0x0 515353 (28) 0x240002b13:0x34:0x0 515501 (28) 0x240002b13:0xa3:0x0 512216 (32) 0x240000401:0x160:0x0 512138 (28) 0x240000403:0x1:0x0 515420 (28) 0x240002b13:0x77:0x0 515313 (28) 0x240002b10:0x38:0x0 515052 (28) 0x240001b79:0xb4:0x0 32086 (28) 0x240002b15:0x1f:0x0 288023 (28) 0x240001b73:0x40:0x0 224003 (28) 0x2400013a0:0x13:0x0 32200 (48) 0x240002b13:0xd62:0x0 514750 (32) 0x240001b78:0x5b20:0x0 512445 (28) 0x240001b73:0x65:0x0 514546 (80) 0x240001b78:0x5aa1:0x0 515032 (28) 0x240001b79:0xac:0x0 512126 (48) 0x240000404:0x8:0x0 515198 (28) 0x240001b79:0xd4:0x0 32006 (96) 0x240000405:0x31:0x0 288016 (76) 0x240001b73:0x39:0x0 514658 (32) 0x240001b78:0x5ac4:0x0 3248 (48) 0x240002b13:0xe5:0x0 3229 (48) 0x240002b13:0xd2:0x0 224025 (28) 0x240001b73:0xf:0x0 32092 (28) 0x240002b15:0x25:0x0 515533 (48) 0x240002b13:0x117:0x0 515375 (28) 0x240002b13:0x4a:0x0 515449 (76) 0x240002b13:0x89:0x0 512220 (80) 0x240000401:0x161:0x0 128017 (32) 0x240000401:0x17f:0x0 192022 (28) 0x240001b73:0x8:0x0 188 (28) 0x2400013a2:0x13:0x0 515404 (28) 0x240002b13:0x67:0x0 514565 (32) 0x240001b78:0x5ab4:0x0 256018 (28) 0x240001b73:0x10:0x0 512212 (48) 0x240000406:0x1ccf:0x0 514847 (28) 0x240001b79:0x77:0x0 514878 (28) 0x240001b79:0x83:0x0 512452 (28) 0x240001b73:0x6a:0x0 512132 (96) 0x240000404:0xa:0x0 32143 (28) 0x240002b15:0x58:0x0 3215 (96) 0x240002b13:0xc4:0x0 224019 (28) 0x2400013a0:0x23:0x0 3245 (48) 0x240002b13:0xe2:0x0 515399 (28) 0x240002b13:0x62:0x0 512211 (48) 0x240000406:0x1cce:0x0 288002 (28) 0x240001b73:0x19:0x0 32207 (2072) 0x240002b13:0x13d9:0x0 Stopping /mnt/lustre-mds1 (opts:) on oleg146-server e2fsck -d -v -t -t -f -n /dev/mapper/mds1_flakey -m8 oleg146-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg146-server: Use max possible thread num: 2 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 16) [Thread 1] Scan group range [16, 32) [Thread 1] jumping to group 16 [Thread 1] e2fsck_pass1_run:2564: increase 
inode 512039 badness 0 to 2 for 10084 [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 78 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 79 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 80 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 
10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 159 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 160 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 163 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 164 badness 0 to 2 for 10084 [Thread 0] check_blocks:5294: increase inode 548 badness 0 to 1 for 1000c [Thread 0] Inode 548, i_size is 17592186040319, should be 0. [Thread 0] Fix? 
no [Thread 0] e2fsck_pass1_run:2564: increase inode 551 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 980 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 982 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 1119 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 32001 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32002 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32003 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32004 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32005 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32007 badness 0 to 2 for 10084 [Thread 0] group 2 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 64002 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 64003 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 64004 badness 0 to 2 for 10084 [Thread 0] group 3 finished [Thread 0] group 4 finished [Thread 0] group 5 finished [Thread 0] group 6 finished [Thread 0] group 7 finished [Thread 0] group 8 finished [Thread 0] group 9 finished [Thread 0] group 10 finished [Thread 0] group 11 finished [Thread 0] group 12 finished [Thread 0] group 13 finished [Thread 0] group 14 finished [Thread 0] group 15 finished [Thread 0] group 16 finished [Thread 0] Pass 1: Memory used: 1084k/0k (845k/240k), time: 0.05/ 0.04/ 0.06 [Thread 0] Pass 1: I/O read: 10MB, write: 0MB, rate: 187.91MB/s [Thread 0] Scanned group range [0, 16), inodes 6160 [Thread 1] group 17 finished [Thread 1] group 18 finished [Thread 1] group 19 finished [Thread 1] group 20 finished [Thread 1] group 21 finished [Thread 1] group 22 finished [Thread 1] group 23 finished [Thread 1] group 24 finished [Thread 1] group 25 finished [Thread 1] group 26 finished [Thread 1] group 27 finished [Thread 1] group 28 finished [Thread 1] group 29 finished [Thread 1] group 30 finished [Thread 1] group 31 finished [Thread 1] group 32 finished [Thread 1] Pass 1: Memory used: 1112k/0k (736k/377k), time: 0.08/ 0.04/ 0.08 [Thread 1] Pass 1: I/O read: 28MB, write: 0MB, rate: 334.55MB/s [Thread 1] Scanned group range [16, 32), inodes 20784 Pass 2: Checking directory structure Entry '..' in .../??? (519023) has deleted/unused inode 96153 fid=[0x200005220:0x508:0x0]. Clear? no Entry '..' in .../??? (519024) has deleted/unused inode 96153 fid=[0x200005220:0x508:0x0]. Clear? no Entry '..' in .../??? (519025) has deleted/unused inode 96154 fid=[0x200005220:0x509:0x0]. Clear? no Entry '..' in .../??? (519029) has deleted/unused inode 96154 fid=[0x200005220:0x509:0x0]. Clear? no Entry '..' in .../??? (519030) has deleted/unused inode 96153 fid=[0x200005220:0x508:0x0]. Clear? no Entry '..' in .../??? (519033) has deleted/unused inode 96155 fid=[0x200005220:0x50a:0x0]. Clear? no Entry '..' in .../??? (519035) has deleted/unused inode 96153 fid=[0x200005220:0x508:0x0]. Clear? no Entry '..' in .../??? (519036) has deleted/unused inode 96153 fid=[0x200005220:0x508:0x0]. Clear? no Entry '..' in .../??? (519037) has deleted/unused inode 96154 fid=[0x200005220:0x509:0x0]. Clear? no Entry '..' in .../??? (519043) has deleted/unused inode 96155 fid=[0x200005220:0x50a:0x0]. Clear? no Entry '..' in .../??? (519044) has deleted/unused inode 96154 fid=[0x200005220:0x509:0x0]. Clear? no Entry '..' in .../??? 
(519045) has deleted/unused inode 96154 fid=[0x200005220:0x509:0x0]. Clear? no Entry '..' in .../??? (519046) has deleted/unused inode 96153 fid=[0x200005220:0x508:0x0]. Clear? no Entry '..' in .../??? (519047) has deleted/unused inode 96153 fid=[0x200005220:0x508:0x0]. Clear? no Entry '..' in .../??? (519048) has deleted/unused inode 96153 fid=[0x200005220:0x508:0x0]. Clear? no Entry '..' in .../??? (519052) has deleted/unused inode 96155 fid=[0x200005220:0x50a:0x0]. Clear? no Entry '..' in .../??? (519058) has deleted/unused inode 96154 fid=[0x200005220:0x509:0x0]. Clear? no Entry '..' in .../??? (519059) has deleted/unused inode 96155 fid=[0x200005220:0x50a:0x0]. Clear? no Entry '..' in .../??? (519061) has deleted/unused inode 96153 fid=[0x200005220:0x508:0x0]. Clear? no Entry '..' in .../??? (519062) has deleted/unused inode 96154 fid=[0x200005220:0x509:0x0]. Clear? no Entry '..' in .../??? (519064) has deleted/unused inode 96153 fid=[0x200005220:0x508:0x0]. Clear? no Entry '..' in .../??? (519066) has deleted/unused inode 96155 fid=[0x200005220:0x50a:0x0]. Clear? no Entry '..' in .../??? (519067) has deleted/unused inode 96153 fid=[0x200005220:0x508:0x0]. Clear? no Entry '..' in .../??? (519068) has deleted/unused inode 96154 fid=[0x200005220:0x509:0x0]. Clear? no Entry '..' in .../??? (519070) has deleted/unused inode 96155 fid=[0x200005220:0x50a:0x0]. Clear? no Entry '..' in .../??? (519072) has deleted/unused inode 96155 fid=[0x200005220:0x50a:0x0]. Clear? no Entry '..' in .../??? (519074) has deleted/unused inode 96154 fid=[0x200005220:0x509:0x0]. Clear? no Pass 2: Memory used: 944k/0k (199k/746k), time: 0.05/ 0.02/ 0.02 Pass 2: I/O read: 14MB, write: 0MB, rate: 292.64MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 944k/0k (199k/746k), time: 0.20/ 0.12/ 0.10 '..' in /REMOTE_PARENT_DIR/0x200002341:0xbb:0x0/d0 (32165) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200002341:0xbb:0x0 (32172). Fix? no '..' in /REMOTE_PARENT_DIR/0x200002341:0xbb:0x0/d2 (32168) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200002341:0xbb:0x0 (32172). Fix? no '..' in /REMOTE_PARENT_DIR/0x200002341:0xbb:0x0/d4 (32169) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200002341:0xbb:0x0 (32172). Fix? no '..' in /REMOTE_PARENT_DIR/0x200002341:0xbb:0x0/d6 (32170) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200002341:0xbb:0x0 (32172). Fix? no '..' in /REMOTE_PARENT_DIR/0x200002341:0xbb:0x0/d8 (32171) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200002341:0xbb:0x0 (32172). Fix? no Unconnected directory inode 518945 (was in /ROOT/d300p.sanity) Connect to /lost+found? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xad:0x0/d0 (96104) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xad:0x0 (32259). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xbe:0x0/d2 (96105) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xbe:0x0 (32260). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xd6:0x0/d4 (96106) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xd6:0x0 (32262). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xad:0x0/d6 (96107) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xad:0x0 (32259). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0x26:0x0/d8 (96108) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0x26:0x0 (64257). Fix? no '..' 
in /REMOTE_PARENT_DIR/0x200005222:0x8a:0x0/d10 (96109) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0x8a:0x0 (32258). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xd1:0x0/d12 (96110) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xd1:0x0 (64258). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0x26:0x0/d16 (96112) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0x26:0x0 (64257). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0x8a:0x0/d18 (96113) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0x8a:0x0 (32258). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xd6:0x0/d20 (96114) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xd6:0x0 (32262). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xdb:0x0/d22 (96115) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xdb:0x0 (64260). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0x8a:0x0/d24 (96116) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0x8a:0x0 (32258). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0x26:0x0/d26 (96117) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0x26:0x0 (64257). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xbe:0x0/d28 (96118) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xbe:0x0 (32260). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xcb:0x0/d32 (96120) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xcb:0x0 (32261). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xcb:0x0/d36 (96122) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xcb:0x0 (32261). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xd6:0x0/d38 (96123) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xd6:0x0 (32262). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xd8:0x0/d40 (96124) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xd8:0x0 (64259). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xad:0x0/d42 (96125) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xad:0x0 (32259). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xd1:0x0/d44 (96126) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xd1:0x0 (64258). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xad:0x0/d46 (96127) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xad:0x0 (32259). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xd6:0x0/d48 (96128) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xd6:0x0 (32262). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xcb:0x0/d50 (96129) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xcb:0x0 (32261). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0x8a:0x0/d54 (96131) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0x8a:0x0 (32258). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xd8:0x0/d58 (96133) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xd8:0x0 (64259). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xad:0x0/d60 (96134) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xad:0x0 (32259). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xd6:0x0/d62 (96135) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xd6:0x0 (32262). Fix? no '..' 
in /REMOTE_PARENT_DIR/0x200005222:0x8a:0x0/d64 (96136) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0x8a:0x0 (32258). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0x8a:0x0/d66 (96137) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0x8a:0x0 (32258). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xd8:0x0/d70 (96139) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xd8:0x0 (64259). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xdb:0x0/d72 (96140) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xdb:0x0 (64260). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xd1:0x0/d74 (96141) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xd1:0x0 (64258). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xcb:0x0/d78 (96143) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xcb:0x0 (32261). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xcb:0x0/d80 (96144) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xcb:0x0 (32261). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xd6:0x0/d82 (96145) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xd6:0x0 (32262). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xd8:0x0/d84 (96146) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xd8:0x0 (64259). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0x26:0x0/d86 (96147) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0x26:0x0 (64257). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xd1:0x0/d88 (96148) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xd1:0x0 (64258). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xd8:0x0/d90 (96149) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xd8:0x0 (64259). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xcb:0x0/d92 (96150) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xcb:0x0 (32261). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xd6:0x0/d94 (96151) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xd6:0x0 (32262). Fix? no '..' in /REMOTE_PARENT_DIR/0x200005222:0xd6:0x0/d96 (96152) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x200005222:0xd6:0x0 (32262). Fix? no Unconnected directory inode 512116 (was in /REMOTE_PARENT_DIR/0x200000401:0x147:0x0) Connect to /lost+found? no '..' in ... (512116) is /REMOTE_PARENT_DIR/0x200000401:0x147:0x0 (512111), should be (0). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3456:0x0 (518464) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3457:0x0 (518466) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3458:0x0 (518468) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3459:0x0 (518470) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x345b:0x0 (518474) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' 
in /REMOTE_PARENT_DIR/0x200003ab1:0x345c:0x0 (518476) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x345f:0x0 (518482) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3461:0x0 (518486) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3465:0x0 (518494) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3466:0x0 (518496) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3467:0x0 (518498) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x346a:0x0 (518504) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x346b:0x0 (518506) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x346c:0x0 (518508) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3470:0x0 (518516) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3471:0x0 (518518) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3472:0x0 (518520) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3473:0x0 (518522) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3474:0x0 (518524) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3477:0x0 (518530) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3478:0x0 (518532) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x347b:0x0 (518538) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x347c:0x0 (518540) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x347f:0x0 (518546) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3481:0x0 (518550) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3484:0x0 (518556) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' 
in /REMOTE_PARENT_DIR/0x200003ab1:0x3485:0x0 (518558) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x200003ab1:0x3486:0x0 (518560) is /ROOT/d230o.sanity/[0x200003ab1:0x33f1:0x0]:0 (518360), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in ... (518945) is /ROOT/d300p.sanity (518944), should be (0). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d2 (519023) is ... (96153), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d3 (519024) is ... (96153), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d5 (519025) is ... (96154), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d7 (519026) is /REMOTE_PARENT_DIR/0x200005222:0x195:0x0 (64262), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d10 (519028) is /REMOTE_PARENT_DIR/0x200005222:0x195:0x0 (64262), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d12 (519029) is ... (96154), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d13 (519030) is ... (96153), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d18 (519032) is /REMOTE_PARENT_DIR/0x200005222:0x195:0x0 (64262), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d23 (519033) is ... (96155), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d24 (519034) is /REMOTE_PARENT_DIR/0x200005222:0x195:0x0 (64262), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d25 (519035) is ... (96153), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d28 (519036) is ... (96153), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d29 (519037) is ... (96154), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d31 (519039) is /REMOTE_PARENT_DIR/0x200005222:0x195:0x0 (64262), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d33 (519040) is /REMOTE_PARENT_DIR/0x200005222:0x195:0x0 (64262), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d35 (519042) is /REMOTE_PARENT_DIR/0x200005222:0x195:0x0 (64262), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d40 (519043) is ... (96155), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d41 (519044) is ... (96154), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d44 (519045) is ... 
(96154), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d45 (519046) is ... (96153), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d49 (519047) is ... (96153), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d51 (519048) is ... (96153), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d54 (519050) is /REMOTE_PARENT_DIR/0x200005222:0x195:0x0 (64262), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d58 (519052) is ... (96155), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d64 (519054) is /REMOTE_PARENT_DIR/0x200005222:0x195:0x0 (64262), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d66 (519056) is /REMOTE_PARENT_DIR/0x200005222:0x195:0x0 (64262), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d69 (519058) is ... (96154), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d70 (519059) is ... (96155), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d71 (519060) is /REMOTE_PARENT_DIR/0x200005222:0x195:0x0 (64262), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d73 (519061) is ... (96153), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d74 (519062) is ... (96154), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d79 (519064) is ... (96153), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d83 (519065) is /REMOTE_PARENT_DIR/0x200005222:0x195:0x0 (64262), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d84 (519066) is ... (96155), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d85 (519067) is ... (96153), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d88 (519068) is ... (96154), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d90 (519070) is ... (96155), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d97 (519072) is ... (96155), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? no '..' in /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0/d99 (519074) is ... (96154), should be /ROOT/d300ue.sanity/[0x200005220:0x506:0x0]:0 (64261). Fix? 
no Pass 3: Memory used: 944k/0k (169k/776k), time: 0.04/ 0.01/ 0.03 Pass 3: I/O read: 20MB, write: 0MB, rate: 505.18MB/s Pass 4: Checking reference counts Inode 32172 ref count is 11, should be 6. Fix? no Inode 32258 ref count is 14, should be 8. Fix? no Inode 32259 ref count is 11, should be 6. Fix? no Inode 32260 ref count is 13, should be 11. Fix? no Inode 32261 ref count is 11, should be 5. Fix? no Inode 32262 ref count is 15, should be 7. Fix? no Inode 64257 ref count is 11, should be 7. Fix? no Inode 64258 ref count is 11, should be 7. Fix? no Inode 64259 ref count is 9, should be 4. Fix? no Inode 64260 ref count is 10, should be 8. Fix? no Inode 64261 ref count is 102, should be 63. Fix? no Inode 64262 ref count is 2, should be 14. Fix? no Inode 512004 ref count is 475, should be 494. Fix? no Inode 512111 ref count is 2, should be 3. Fix? no Inode 512116 ref count is 4, should be 3. Fix? no Inode 518360 ref count is 51, should be 79. Fix? no Inode 518944 ref count is 3, should be 4. Fix? no Inode 518945 ref count is 4, should be 3. Fix? no Unattached inode 519127 Connect to /lost+found? no Pass 4: Memory used: 944k/0k (98k/847k), time: 0.03/ 0.03/ 0.00 Pass 4: I/O read: 1MB, write: 0MB, rate: 36.19MB/s Pass 5: Checking group summary information oleg146-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (21766144, 9105) != expected (21745664, 9106) oleg146-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (21561344, 9115) != expected (21540864, 9116) oleg146-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (21815296, 9145) != expected (21798912, 9147) oleg146-server: [QUOTA WARNING] Usage inconsistent for ID 2:actual (16384, 4) != expected (12288, 3) pdsh@oleg146-client: oleg146-server: ssh exited with exit code 4 Pass 5: Memory used: 812k/0k (93k/719k), time: 0.03/ 0.03/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 32.00MB/s Update quota info for quota type 0? no Update quota info for quota type 1? no Update quota info for quota type 2?
no lustre-MDT0000: ********** WARNING: Filesystem still has errors ********** 9160 inodes used (0.89%, out of 1024000) 23 non-contiguous files (0.3%) 31 non-contiguous directories (0.3%) # of inodes with ind/dind/tind blocks: 24/0/0 289052 blocks used (45.16%, out of 640000) 0 bad blocks 5 large files 6536 regular files 2579 directories 0 character device files 0 block device files 0 fifos 4294967295 links 35 symbolic links (35 fast symbolic links) 0 sockets ------------ 9151 files Memory used: 812k/0k (92k/721k), time: 0.30/ 0.19/ 0.13 I/O read: 34MB, write: 0MB, rate: 113.58MB/s Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-MDT0000 Stopping /mnt/lustre-mds2 (opts:) on oleg146-server e2fsck -d -v -t -t -f -n /dev/mapper/mds2_flakey -m8 oleg146-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg146-server: Use max possible thread num: 2 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 16) [Thread 1] Scan group range [16, 32) [Thread 1] jumping to group 16 [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 78 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 79 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 80 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] 
e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 32001 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32002 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32003 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32004 badness 0 to 2 for 10084 [Thread 0] group 2 finished [Thread 0] group 3 finished [Thread 0] group 4 finished [Thread 0] 
group 5 finished [Thread 1] group 17 finished [Thread 1] group 18 finished [Thread 1] group 19 finished [Thread 1] group 20 finished [Thread 1] group 21 finished [Thread 1] group 22 finished [Thread 1] group 23 finished [Thread 1] group 24 finished [Thread 1] group 25 finished [Thread 1] group 26 finished [Thread 1] group 27 finished [Thread 1] group 28 finished [Thread 1] group 29 finished [Thread 1] group 30 finished [Thread 1] group 31 finished [Thread 1] group 32 finished [Thread 0] group 6 finished [Thread 1] Pass 1: Memory used: 988k/0k (732k/257k), time: 0.07/ 0.06/ 0.09 [Thread 1] Pass 1: I/O read: 24MB, write: 0MB, rate: 320.27MB/s [Thread 1] Scanned group range [16, 32), inodes 20518 [Thread 0] group 7 finished [Thread 0] group 8 finished [Thread 0] group 9 finished [Thread 0] group 10 finished [Thread 0] group 11 finished [Thread 0] group 12 finished [Thread 0] group 13 finished [Thread 0] group 14 finished [Thread 0] group 15 finished [Thread 0] group 16 finished [Thread 0] Pass 1: Memory used: 996k/0k (641k/356k), time: 0.08/ 0.06/ 0.09 [Thread 0] Pass 1: I/O read: 16MB, write: 0MB, rate: 209.00MB/s [Thread 0] Scanned group range [0, 16), inodes 10692 Pass 2: Checking directory structure Pass 2: Memory used: 852k/0k (146k/707k), time: 0.06/ 0.02/ 0.02 Pass 2: I/O read: 11MB, write: 0MB, rate: 198.81MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 852k/0k (146k/707k), time: 0.19/ 0.13/ 0.11 Unconnected directory inode 512120 (was in /REMOTE_PARENT_DIR/0x240000405:0x31:0x0) Connect to /lost+found? no '..' in ... (512120) is /REMOTE_PARENT_DIR/0x240000405:0x31:0x0 (32006), should be (0). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d0 (514525) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d2 (514526) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d4 (514527) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d6 (514528) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d8 (514529) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d16 (514533) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d20 (514535) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d22 (514536) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d24 (514537) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d26 (514538) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d36 (514543) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d46 (514548) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? 
no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d58 (514554) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d60 (514555) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d62 (514556) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d66 (514558) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d82 (514566) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d84 (514567) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d86 (514568) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d88 (514569) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d90 (514570) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d92 (514571) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0/d94 (514572) is /REMOTE_PARENT_DIR (512004), should be /REMOTE_PARENT_DIR/0x240001b79:0x7:0x0 (514575). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5aef:0x0 (514701) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5af0:0x0 (514702) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5af1:0x0 (514703) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5af2:0x0 (514704) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5af3:0x0 (514705) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5af4:0x0 (514706) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5af5:0x0 (514707) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5af6:0x0 (514708) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5af7:0x0 (514709) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5af8:0x0 (514710) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5af9:0x0 (514711) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' 
in /REMOTE_PARENT_DIR/0x240001b78:0x5afa:0x0 (514712) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5afb:0x0 (514713) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5afc:0x0 (514714) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5afd:0x0 (514715) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5afe:0x0 (514716) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5aff:0x0 (514717) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b00:0x0 (514718) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b01:0x0 (514719) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b02:0x0 (514720) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b03:0x0 (514721) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b04:0x0 (514722) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b05:0x0 (514723) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b06:0x0 (514724) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b07:0x0 (514725) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b08:0x0 (514726) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b09:0x0 (514727) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b0a:0x0 (514728) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b0b:0x0 (514729) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b0c:0x0 (514730) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b0d:0x0 (514731) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b0e:0x0 (514732) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b0f:0x0 (514733) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' 
in /REMOTE_PARENT_DIR/0x240001b78:0x5b10:0x0 (514734) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b11:0x0 (514735) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b12:0x0 (514736) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b13:0x0 (514737) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b14:0x0 (514738) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b15:0x0 (514739) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b16:0x0 (514740) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b17:0x0 (514741) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b18:0x0 (514742) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b19:0x0 (514743) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b1a:0x0 (514744) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b1b:0x0 (514745) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b1c:0x0 (514746) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b1d:0x0 (514747) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b1e:0x0 (514748) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b1f:0x0 (514749) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b20:0x0 (514750) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5b21:0x0 (514751) is /REMOTE_PARENT_DIR/0x240001b7b:0x12:0x0 (514652), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5bcf:0x0 (514771) is /REMOTE_PARENT_DIR/0x240001b78:0x5c0c:0x0/[0x240001b71:0x74:0x0]:0/sub49 (514753), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5bd0:0x0 (514773) is /REMOTE_PARENT_DIR/0x240001b78:0x5c0c:0x0/[0x240001b71:0x74:0x0]:0/sub49 (514753), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5bd1:0x0 (514775) is /REMOTE_PARENT_DIR/0x240001b78:0x5c0c:0x0/[0x240001b71:0x74:0x0]:0/sub49 (514753), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' 
in /REMOTE_PARENT_DIR/0x240001b78:0x5bd2:0x0 (514777) is /REMOTE_PARENT_DIR/0x240001b78:0x5c0c:0x0/[0x240001b71:0x74:0x0]:0/sub49 (514753), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240001b78:0x5bd4:0x0/[0x240001b71:0x72:0x0]:0/d9 (514779) is /REMOTE_PARENT_DIR/0x240001b78:0x5c0c:0x0/[0x240001b71:0x74:0x0]:0/sub49 (514753), should be /REMOTE_PARENT_DIR/0x240001b78:0x5bd4:0x0/[0x240001b71:0x72:0x0]:0 (514781). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x7f:0x0 (515429) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x80:0x0 (515431) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x81:0x0 (515433) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x82:0x0 (515435) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x84:0x0 (515439) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x85:0x0 (515441) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x86:0x0 (515443) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x87:0x0 (515445) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xf1:0x0 (515446) is /REMOTE_PARENT_DIR/0x240002b13:0x12d:0x0 (32042), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x88:0x0 (515447) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xf2:0x0 (515448) is /REMOTE_PARENT_DIR/0x240002b13:0x1e85:0x0 (32041), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x89:0x0 (515449) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xf3:0x0 (515450) is /REMOTE_PARENT_DIR/0x240002b13:0x131:0x0 (32044), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x8a:0x0 (515451) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xf4:0x0 (515452) is /REMOTE_PARENT_DIR/0x240002b13:0x12d:0x0 (32042), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x8b:0x0 (515453) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xf5:0x0 (515454) is /REMOTE_PARENT_DIR/0x240002b13:0x1e85:0x0 (32041), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' 
in /REMOTE_PARENT_DIR/0x240002b13:0x8c:0x0 (515455) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xf6:0x0 (515456) is /REMOTE_PARENT_DIR/0x240002b13:0x12d:0x0 (32042), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x8d:0x0 (515457) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x8e:0x0 (515459) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xf7:0x0 (515460) is /REMOTE_PARENT_DIR/0x240002b13:0x134:0x0 (32045), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x8f:0x0 (515461) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x90:0x0 (515463) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xf8:0x0 (515464) is /REMOTE_PARENT_DIR/0x240002b13:0x1e85:0x0 (32041), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x91:0x0 (515465) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xf9:0x0 (515466) is /REMOTE_PARENT_DIR/0x240002b13:0x12e:0x0 (32043), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x92:0x0 (515467) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xfa:0x0 (515468) is /REMOTE_PARENT_DIR/0x240002b13:0x12d:0x0 (32042), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x93:0x0 (515469) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xfb:0x0 (515470) is /REMOTE_PARENT_DIR/0x240002b13:0x131:0x0 (32044), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x94:0x0 (515471) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xfc:0x0 (515472) is /REMOTE_PARENT_DIR/0x240002b13:0x12d:0x0 (32042), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x95:0x0 (515473) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xfd:0x0 (515474) is /REMOTE_PARENT_DIR/0x240002b13:0x134:0x0 (32045), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x96:0x0 (515475) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xfe:0x0 (515476) is /REMOTE_PARENT_DIR/0x240002b13:0x1e85:0x0 (32041), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' 
in /REMOTE_PARENT_DIR/0x240002b13:0x97:0x0 (515477) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xff:0x0 (515478) is /REMOTE_PARENT_DIR/0x240002b13:0x134:0x0 (32045), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x98:0x0 (515479) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x99:0x0 (515481) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x100:0x0 (515482) is /REMOTE_PARENT_DIR/0x240002b13:0x12e:0x0 (32043), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x9a:0x0 (515483) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x9b:0x0 (515485) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x101:0x0 (515486) is /REMOTE_PARENT_DIR/0x240002b13:0x12e:0x0 (32043), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x9c:0x0 (515487) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x102:0x0 (515488) is /REMOTE_PARENT_DIR/0x240002b13:0x131:0x0 (32044), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x9d:0x0 (515489) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x103:0x0 (515490) is /REMOTE_PARENT_DIR/0x240002b13:0x131:0x0 (32044), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x104:0x0 (515492) is /REMOTE_PARENT_DIR/0x240002b13:0x131:0x0 (32044), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x105:0x0 (515494) is /REMOTE_PARENT_DIR/0x240002b13:0x12d:0x0 (32042), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xa0:0x0 (515495) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xa1:0x0 (515497) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x106:0x0 (515498) is /REMOTE_PARENT_DIR/0x240002b13:0x1e85:0x0 (32041), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xa2:0x0 (515499) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x107:0x0 (515500) is /REMOTE_PARENT_DIR/0x240002b13:0x12d:0x0 (32042), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xa3:0x0 (515501) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' 
in /REMOTE_PARENT_DIR/0x240002b13:0x108:0x0 (515502) is /REMOTE_PARENT_DIR/0x240002b13:0x134:0x0 (32045), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xa4:0x0 (515503) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xa5:0x0 (515505) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x109:0x0 (515506) is /REMOTE_PARENT_DIR/0x240002b13:0x131:0x0 (32044), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xa6:0x0 (515507) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x10a:0x0 (515508) is /REMOTE_PARENT_DIR/0x240002b13:0x12e:0x0 (32043), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xa7:0x0 (515509) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x10b:0x0 (515510) is /REMOTE_PARENT_DIR/0x240002b13:0x134:0x0 (32045), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xa8:0x0 (515511) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x10c:0x0 (515512) is /REMOTE_PARENT_DIR/0x240002b13:0x1e85:0x0 (32041), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xa9:0x0 (515513) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x10d:0x0 (515514) is /REMOTE_PARENT_DIR/0x240002b13:0x131:0x0 (32044), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xaa:0x0 (515515) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x10e:0x0 (515516) is /REMOTE_PARENT_DIR/0x240002b13:0x131:0x0 (32044), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x10f:0x0 (515518) is /REMOTE_PARENT_DIR/0x240002b13:0x12d:0x0 (32042), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x110:0x0 (515520) is /REMOTE_PARENT_DIR/0x240002b13:0x134:0x0 (32045), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xad:0x0 (515521) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x111:0x0 (515522) is /REMOTE_PARENT_DIR/0x240002b13:0x131:0x0 (32044), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xae:0x0 (515523) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x112:0x0 (515524) is /REMOTE_PARENT_DIR/0x240002b13:0x134:0x0 (32045), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xaf:0x0 (515525) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? 
no '..' in /REMOTE_PARENT_DIR/0x240002b13:0xb0:0x0 (515527) is /REMOTE_PARENT_DIR/0x240002b13:0xb1:0x0/[0x240002b13:0x1a:0x0]:0 (515327), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x113:0x0 (515529) is /REMOTE_PARENT_DIR/0x240002b13:0x134:0x0 (32045), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x114:0x0 (515530) is /REMOTE_PARENT_DIR/0x240002b13:0x1e85:0x0 (32041), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x115:0x0 (515531) is /REMOTE_PARENT_DIR/0x240002b13:0x12e:0x0 (32043), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x116:0x0 (515532) is /REMOTE_PARENT_DIR/0x240002b13:0x12e:0x0 (32043), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x117:0x0 (515533) is /REMOTE_PARENT_DIR/0x240002b13:0x12e:0x0 (32043), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x118:0x0 (515534) is /REMOTE_PARENT_DIR/0x240002b13:0x131:0x0 (32044), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x119:0x0 (515535) is /REMOTE_PARENT_DIR/0x240002b13:0x131:0x0 (32044), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x11a:0x0 (515536) is /REMOTE_PARENT_DIR/0x240002b13:0x1e85:0x0 (32041), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x11b:0x0 (515537) is /REMOTE_PARENT_DIR/0x240002b13:0x12e:0x0 (32043), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x11c:0x0 (515538) is /REMOTE_PARENT_DIR/0x240002b13:0x12e:0x0 (32043), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x11d:0x0 (515539) is /REMOTE_PARENT_DIR/0x240002b13:0x12d:0x0 (32042), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x11e:0x0 (515540) is /REMOTE_PARENT_DIR/0x240002b13:0x131:0x0 (32044), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x11f:0x0 (515541) is /REMOTE_PARENT_DIR/0x240002b13:0x1e85:0x0 (32041), should be /REMOTE_PARENT_DIR (512004). Fix? no '..' in /REMOTE_PARENT_DIR/0x240002b13:0x120:0x0 (515542) is /REMOTE_PARENT_DIR/0x240002b13:0x131:0x0 (32044), should be /REMOTE_PARENT_DIR (512004). Fix? no Pass 3: Memory used: 852k/0k (122k/731k), time: 0.08/ 0.03/ 0.05 Pass 3: I/O read: 27MB, write: 0MB, rate: 343.18MB/s Pass 4: Checking reference counts Inode 32006 ref count is 4, should be 5. Fix? no Inode 32041 ref count is 2, should be 11. Fix? no Inode 32042 ref count is 2, should be 11. Fix? no Inode 32043 ref count is 4, should be 13. Fix? no Inode 32044 ref count is 4, should be 17. Fix? no oleg146-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (17367040, 7509) != expected (17297408, 7509) oleg146-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (17412096, 7522) != expected (17342464, 7522) oleg146-server: [QUOTA WARNING] Usage inconsistent for ID 0:actual (17420288, 7539) != expected (17358848, 7541) oleg146-server: [QUOTA WARNING] Usage inconsistent for ID 1:actual (12288, 3) != expected (8192, 2) oleg146-server: [QUOTA WARNING] Usage inconsistent for ID 2:actual (16384, 4) != expected (12288, 3) pdsh@oleg146-client: oleg146-server: ssh exited with exit code 4 Inode 32045 ref count is 2, should be 10. Fix? no Inode 512004 ref count is 817, should be 692. Fix? no Inode 512120 ref count is 4, should be 3.
Fix? no Inode 514575 ref count is 53, should be 30. Fix? no Inode 514652 ref count is 3, should be 54. Fix? no Inode 514753 ref count is 4, should be 9. Fix? no Inode 514781 ref count is 3, should be 2. Fix? no Inode 515327 ref count is 15, should be 60. Fix? no Pass 4: Memory used: 852k/0k (81k/772k), time: 0.03/ 0.03/ 0.00 Pass 4: I/O read: 1MB, write: 0MB, rate: 34.11MB/s Pass 5: Checking group summary information Pass 5: Memory used: 772k/0k (77k/696k), time: 0.01/ 0.01/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 94.69MB/s Update quota info for quota type 0? no Update quota info for quota type 1? no Update quota info for quota type 2? no lustre-MDT0001: ********** WARNING: Filesystem still has errors ********** 7556 inodes used (0.74%, out of 1024000) 16 non-contiguous files (0.2%) 15 non-contiguous directories (0.2%) # of inodes with ind/dind/tind blocks: 25/0/0 287981 blocks used (45.00%, out of 640000) 0 bad blocks 1 large file 5533 regular files 1992 directories 4 character device files 2 block device files 2 fifos 2 links 11 symbolic links (11 fast symbolic links) 2 sockets ------------ 7548 files Memory used: 772k/0k (75k/698k), time: 0.31/ 0.20/ 0.16 I/O read: 38MB, write: 0MB, rate: 121.17MB/s Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-MDT0001 PASS 804 (27s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 805: ZFS can remove from full fs ========== 00:12:46 (1713413566) SKIP: sanity test_805 ZFS specific test SKIP 805 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 806: Verify Lazy Size on MDS ============== 00:12:50 (1713413570) Test SOM for single-threaded write 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.039279 s, 26.7 MB/s Test SOM for single client multi-threaded(32) write Test SOM for multi-client (1) writes Verify SOM block count Test SOM for truncate PASS 806 (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 807: verify LSOM syncing tool ============= 00:13:02 (1713413582) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl1 cl1' Test SOM for single-threaded write with fsync 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0585145 s, 17.9 MB/s Test SOM for multi-client (1) writes oleg146-client.virtnet: executing cancel_lru_locks osc Start to sync 3 records. 
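Aside on test 807 above: it exercises the lazy size-on-MDS (LSOM) sync path. Changelog users are registered on both MDTs, the fsync'd writes generate records ("Start to sync 3 records."), the sync tool replays those records to refresh the LSOM attributes, and the users are then cleared and deregistered as the next lines show. A minimal sketch of that changelog lifecycle with stock lctl/lfs commands (the file name and mount point are illustrative assumptions; the test itself drives this through test-framework.sh helpers):

# register a changelog consumer on the MDT; prints the assigned id (e.g. cl1)
lctl --device lustre-MDT0000 changelog_register
# generate metadata activity, then list the pending records
dd if=/dev/zero of=/mnt/lustre/f807.sanity bs=1M count=1 conv=fsync
lfs changelog lustre-MDT0000
# once the records are consumed, release them all (endrec=0) and drop the user
lfs changelog_clear lustre-MDT0000 cl1 0
lctl --device lustre-MDT0000 changelog_deregister cl1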
lustre-MDT0000: clear the changelog for cl1 of all records lustre-MDT0000: Deregistered changelog user #1 lustre-MDT0001: clear the changelog for cl1 of all records lustre-MDT0001: Deregistered changelog user #1 lustre-MDT0001: changelog user 'cl1' not found lustre-MDT0000: changelog user 'cl1' not found PASS 807 (13s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 808: Check trusted.som xattr not logged in Changelogs ========================================================== 00:13:18 (1713413598) mdd.lustre-MDT0000.changelog_mask=+hsm mdd.lustre-MDT0001.changelog_mask=+hsm Registered 2 changelog users: 'cl2 cl2' 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0468633 s, 22.4 MB/s lustre-MDT0000: clear the changelog for cl2 of all records lustre-MDT0000: Deregistered changelog user #2 lustre-MDT0001: clear the changelog for cl2 of all records lustre-MDT0001: Deregistered changelog user #2 lustre-MDT0001: changelog user 'cl2' not found lustre-MDT0000: changelog user 'cl2' not found PASS 808 (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 809: Verify no SOM xattr store for DoM-only files ========================================================== 00:13:27 (1713413607) /mnt/lustre/f809.sanity failed to get som xattr: No data available (61) 1+0 records in 1+0 records out 2048 bytes (2.0 kB) copied, 0.00304923 s, 672 kB/s /mnt/lustre/f809.sanity failed to get som xattr: No data available (61) /mnt/lustre/f809.sanity failed to get som xattr: No data available (61) /mnt/lustre/ failed to get som xattr: No data available (61) PASS 809 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 810: partial page writes on ZFS (LU-11663) ========================================================== 00:13:31 (1713413611) osc.lustre-OST0000-osc-ffff88012a451000.checksum_type=crc32 osc.lustre-OST0001-osc-ffff88012a451000.checksum_type=crc32 fail_loc=0x411 2+0 records in 2+0 records out 20480 bytes (20 kB) copied, 0.0292103 s, 701 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 20000 bytes (20 kB) copied, 0.0152522 s, 1.3 MB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 8000 bytes (8.0 kB) copied, 0.0200883 s, 398 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 1000 bytes (1.0 kB) copied, 0.0144896 s, 69.0 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear osc.lustre-OST0000-osc-ffff88012a451000.checksum_type=adler osc.lustre-OST0001-osc-ffff88012a451000.checksum_type=adler fail_loc=0x411 2+0 records in 2+0 records out 20480 bytes (20 kB) copied, 0.015037 s, 1.4 MB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 20000 bytes (20 kB) copied, 0.0198949 s, 1.0 MB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 8000 bytes (8.0 kB) copied, 0.0208695 s, 383 kB/s 
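Test 810 repeats one pattern: four odd-sized writes (20480, 20000, 8000 and 1000 bytes) under fail_loc=0x411, re-run once per wire checksum algorithm, with the client's LRU locks dropped in between so each verification forces real bulk RPCs. The tunable being cycled is the one visible in the osc.*.checksum_type= lines; a sketch against all OSC instances (the instance suffixes above are specific to this mount):

    # select the bulk RPC checksum algorithm on every OSC device
    lctl set_param osc.*.checksum_type=adler
    # reading it back lists the supported algorithms, with the active one bracketed
    lctl get_param osc.*.checksum_type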
ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 1000 bytes (1.0 kB) copied, 0.0163155 s, 61.3 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear osc.lustre-OST0000-osc-ffff88012a451000.checksum_type=crc32c osc.lustre-OST0001-osc-ffff88012a451000.checksum_type=crc32c fail_loc=0x411 2+0 records in 2+0 records out 20480 bytes (20 kB) copied, 0.0143566 s, 1.4 MB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 20000 bytes (20 kB) copied, 0.0158207 s, 1.3 MB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 8000 bytes (8.0 kB) copied, 0.0193932 s, 413 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 1000 bytes (1.0 kB) copied, 0.0173931 s, 57.5 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear osc.lustre-OST0000-osc-ffff88012a451000.checksum_type=t10ip512 osc.lustre-OST0001-osc-ffff88012a451000.checksum_type=t10ip512 fail_loc=0x411 2+0 records in 2+0 records out 20480 bytes (20 kB) copied, 0.0178648 s, 1.1 MB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 20000 bytes (20 kB) copied, 0.0146981 s, 1.4 MB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 8000 bytes (8.0 kB) copied, 0.0221379 s, 361 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 1000 bytes (1.0 kB) copied, 0.0169944 s, 58.8 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear osc.lustre-OST0000-osc-ffff88012a451000.checksum_type=t10ip4K osc.lustre-OST0001-osc-ffff88012a451000.checksum_type=t10ip4K fail_loc=0x411 2+0 records in 2+0 records out 20480 bytes (20 kB) copied, 0.0201333 s, 1.0 MB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 20000 bytes (20 kB) copied, 0.0166298 s, 1.2 MB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 8000 bytes (8.0 kB) copied, 0.0151847 s, 527 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 1000 bytes (1.0 kB) copied, 0.0142276 s, 70.3 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear osc.lustre-OST0000-osc-ffff88012a451000.checksum_type=t10crc512 osc.lustre-OST0001-osc-ffff88012a451000.checksum_type=t10crc512 fail_loc=0x411 2+0 records in 2+0 records out 20480 bytes (20 kB) copied, 
0.0159852 s, 1.3 MB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 20000 bytes (20 kB) copied, 0.0138192 s, 1.4 MB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 8000 bytes (8.0 kB) copied, 0.0207763 s, 385 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 1000 bytes (1.0 kB) copied, 0.00825346 s, 121 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear osc.lustre-OST0000-osc-ffff88012a451000.checksum_type=t10crc4K osc.lustre-OST0001-osc-ffff88012a451000.checksum_type=t10crc4K fail_loc=0x411 2+0 records in 2+0 records out 20480 bytes (20 kB) copied, 0.0130754 s, 1.6 MB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 20000 bytes (20 kB) copied, 0.0153841 s, 1.3 MB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 8000 bytes (8.0 kB) copied, 0.0219513 s, 364 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear 2+0 records in 2+0 records out 1000 bytes (1.0 kB) copied, 0.0166052 s, 60.2 kB/s ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=clear ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=clear set checksum type to crc32c, rc = 0 PASS 810 (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 812a: do not drop reqs generated when imp is going to idle (LU-11951) ========================================================== 00:13:39 (1713413619) osc.lustre-OST0000-osc-ffff88012a451000.idle_timeout=10 osc.lustre-OST0001-osc-ffff88012a451000.idle_timeout=10 oleg146-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff88012a451000.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff88012a451000.ost_server_uuid in FULL state after 0 sec fail_loc=0x245 fail_val=8 oleg146-client.virtnet: executing wait_import_state CONNECTING osc.lustre-OST0000-osc-ffff88012a451000.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff88012a451000.ost_server_uuid in CONNECTING state after 9 sec fail_loc=0 fail_val=0 osc.lustre-OST0000-osc-ffff88012a451000.idle_timeout=20 osc.lustre-OST0001-osc-ffff88012a451000.idle_timeout=20 PASS 812a (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 812b: do not drop no resend request for idle connect ========================================================== 00:13:57 (1713413637) osc.lustre-OST0000-osc-ffff88012a451000.idle_timeout=10 osc.lustre-OST0001-osc-ffff88012a451000.idle_timeout=10 oleg146-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff88012a451000.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff88012a451000.ost_server_uuid in FULL state after 0 sec fail_loc=0x245 fail_val=8 oleg146-client.virtnet: executing wait_import_state CONNECTING osc.lustre-OST0000-osc-ffff88012a451000.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff88012a451000.ost_server_uuid in 
CONNECTING state after 12 sec fail_loc=0 fail_val=0 Disk quotas for usr 0 (uid 0): Filesystem kbytes quota limit grace files quota limit grace /mnt/lustre/ 169444 0 0 - 16625 0 0 - oleg146-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff88012a451000.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff88012a451000.ost_server_uuid in IDLE state after 13 sec osc.lustre-OST0000-osc-ffff88012a451000.idle_timeout=20 osc.lustre-OST0001-osc-ffff88012a451000.idle_timeout=20 PASS 812b (32s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 812c: idle import vs lock enqueue race ==== 00:14:31 (1713413671) /mnt/lustre/f812c.sanity lmm_stripe_count: 1 lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 0 obdidx objid objid group 0 46024 0xb3c8 0x280000bd1 osc.lustre-OST0000-osc-ffff88012a451000.idle_timeout=10 osc.lustre-OST0001-osc-ffff88012a451000.idle_timeout=10 oleg146-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff88012a451000.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff88012a451000.ost_server_uuid in FULL state after 0 sec fail_loc=0x80000533 1+0 records in 1+0 records out 512 bytes (512 B) copied, 0.505689 s, 1.0 kB/s osc.lustre-OST0000-osc-ffff88012a451000.idle_timeout=20 osc.lustre-OST0001-osc-ffff88012a451000.idle_timeout=20 PASS 812c (18s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 813: File heat verification ================ 00:14:51 (1713413691) Turn on file heat Period second: 60, Decay percentage: 80 flags: 0 readsample: 3 writesample: 2 readbyte: 16 writebyte: 12 Sleep 63 seconds... flags: 0 readsample: 3 writesample: 2 readbyte: 16 writebyte: 12 Sleep 63 seconds...
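The flags/readsample/writesample/readbyte/writebyte blocks are per-file heat counters; with "Period second: 60, Decay percentage: 80" each 63-second sleep lets the counters age across one full decay period before the next sample. A sketch of the interfaces the test is driving (parameter and option spellings assumed):

    # enable file-heat accounting on the client mount
    lctl set_param llite.*.file_heat=1
    # dump the counters for one file -- the source of the blocks above
    lfs heat_get /mnt/lustre/f813.sanity
    # per-file off switch, matching "Turn off file heat for the file" below
    lfs heat_set --off /mnt/lustre/f813.sanity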
flags: 0 readsample: 3 writesample: 2 readbyte: 19 writebyte: 14 Turn off file heat for the file /mnt/lustre/f813.sanity flags: 2 readsample: 0 writesample: 0 readbyte: 0 writebyte: 0 Turn on file heat for the file /mnt/lustre/f813.sanity flags: 0 readsample: 3 writesample: 2 readbyte: 16 writebyte: 12 Turn off file heat support for the Lustre filesystem flags: 0 readsample: 0 writesample: 0 readbyte: 0 writebyte: 0 PASS 813 (129s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 814: sparse cp works as expected (LU-12361) ========================================================== 00:17:02 (1713413822) 0+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00222519 s, 0.0 kB/s PASS 814 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 815: zero byte tiny write doesn't hang (LU-12382) ========================================================== 00:17:07 (1713413827) PASS 815 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 816: do not reset lru_resize on idle reconnect ========================================================== 00:17:12 (1713413832) osc.lustre-OST0000-osc-ffff88012a451000.idle_timeout=10 osc.lustre-OST0001-osc-ffff88012a451000.idle_timeout=10 oleg146-client.virtnet: executing wait_import_state FULL osc.lustre-OST0000-osc-ffff88012a451000.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff88012a451000.ost_server_uuid in FULL state after 0 sec ldlm.namespaces.lustre-OST0000-osc-ffff88012a451000.lru_size=400 ldlm.namespaces.lustre-OST0001-osc-ffff88012a451000.lru_size=400 oleg146-client.virtnet: executing wait_import_state IDLE osc.lustre-OST0000-osc-ffff88012a451000.ost_server_uuid 50 osc.lustre-OST0000-osc-ffff88012a451000.ost_server_uuid in IDLE state after 11 sec 0+0 records in 0+0 records out 0 bytes (0 B) copied, 0.00241027 s, 0.0 kB/s osc.lustre-OST0000-osc-ffff88012a451000.idle_timeout=20 osc.lustre-OST0001-osc-ffff88012a451000.idle_timeout=20 PASS 816 (16s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 817: nfsd won't cache write lock for exec file ========================================================== 00:17:31 (1713413851) PASS 817 (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 818: unlink with failed llog ============== 00:17:36 (1713413856) striped dir -i0 -c1 -H crush2 /mnt/lustre/d818.sanity lfs setstripe: setstripe error for '/mnt/lustre/d818.sanity/f818.sanity': stripe already set Stopping /mnt/lustre-mds1 (opts:) on oleg146-server fail_loc=0x80002105 Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-MDT0000 [12257.025607] LustreError: 7241:0:(osp_sync.c:335:osp_sync_declare_add()) logging isn't available, run LFSCK Failing mds1 on oleg146-server Stopping /mnt/lustre-mds1 (opts:) on oleg146-server 00:17:49 (1713413869) shut down Failover mds1 to oleg146-server mount facets: mds1 Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-MDT0000 00:18:03 (1713413883) targets are mounted 00:18:03 (1713413883) facet_failover done oleg146-client.virtnet: executing
wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec PASS 818 (33s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 819a: too big niobuf in read ============== 00:18:11 (1713413891) 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0498606 s, 21.0 MB/s fail_loc=0x80000248 dd: error reading '/mnt/lustre/f819a.sanity': Value too large for defined data type 0+0 records in 0+0 records out 0 bytes (0 B) copied, 0.0322886 s, 0.0 kB/s PASS 819a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 819b: too big niobuf in write ============= 00:18:16 (1713413896) fail_loc=0x80000248 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0487416 s, 21.5 MB/s PASS 819b (20s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 820: update max EA from open intent ======= 00:18:38 (1713413918) 192.168.201.146@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg146-client.virtnet /mnt/lustre (opts:) Stopping /mnt/lustre-ost1 (opts:) on oleg146-server Stopping /mnt/lustre-ost2 (opts:) on oleg146-server Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre striped dir -i1 -c1 -H fnv_1a_64 /mnt/lustre/d820.sanity/mds2 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-OST0000 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-OST0001 PASS 820 (26s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 823: Setting create_count > OST_MAX_PRECREATE is lowered to maximum ========================================================== 00:19:05 (1713413945) setting create_count to 100200: -result- count: 9984 with max: 20000, expecting: 9984 PASS 823 (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 831: throttling unlink/setattr queuing on OSP ========================================================== 00:19:13 (1713413953) total: 1000 open/close in 3.32 seconds: 301.28 ops/second - unlinked 0 (time 1713413961 ; total 0 ; last 0) total: 1000 unlinks in 77 seconds: 12.987013 unlinks/second PASS 831 (89s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 832: lfs rm_entry ========================= 00:20:45 (1713414045) lfs rm_entry: unable to open 'exists': No such file or directory (2) lfs rm_entry: remove dir entry '/mnt/lustre/dir/not/exists' failed: No such file or directory PASS 832 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 833: Mixed buffered/direct read and write should not return -EIO ========================================================== 00:20:50 (1713414050) 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.26523 s, 41.4 MB/s 50+0 records in 50+0 records 
out 52428800 bytes (52 MB) copied, 0.0204671 s, 2.6 GB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.04858 s, 50.0 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.73584 s, 19.2 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.10775 s, 47.3 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 3.48851 s, 15.0 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 0.602148 s, 87.1 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.16479 s, 45.0 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.63845 s, 19.9 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 0.632613 s, 82.9 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 3.11914 s, 16.8 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.48355 s, 35.3 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.46206 s, 21.3 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 0.607197 s, 86.3 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.49081 s, 21.0 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 0.646186 s, 81.1 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.703 s, 19.4 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.1376 s, 46.1 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.29828 s, 22.8 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 0.76968 s, 68.1 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.14741 s, 45.7 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 0.659482 s, 79.5 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 3.5144 s, 14.9 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 0.83737 s, 62.6 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 3.59675 s, 14.6 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.26799 s, 41.3 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.44959 s, 21.4 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 0.99498 s, 52.7 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.58159 s, 20.3 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.21531 s, 43.1 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.25244 s, 23.3 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 0.74229 s, 70.6 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.85137 s, 18.4 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 0.958349 s, 54.7 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.04202 s, 25.7 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.07196 s, 48.9 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.30381 s, 22.8 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 1.61438 s, 32.5 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 3.02663 s, 17.3 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 0.507167 s, 103 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.24864 s, 23.3 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 0.784947 s, 66.8 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 2.28977 s, 22.9 MB/s 50+0 
records in 50+0 records out 52428800 bytes (52 MB) copied, 1.05023 s, 49.9 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 3.47521 s, 15.1 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 6.25744 s, 8.4 MB/s 50+0 records in 50+0 records out 52428800 bytes (52 MB) copied, 5.00229 s, 10.5 MB/s PASS 833 (37s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity test_842 skipping SLOW test 842 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 850: lljobstat can parse living and aggregated job_stats ========================================================== 00:21:30 (1713414090) striped dir -i0 -c2 -H crush2 /mnt/lustre/d850.sanity error: list_param: param_path '*/*/job_stats': No such file or directory error: list_param: listing '*/*/job_stats': No such file or directory --- timestamp: 1713414091 top_jobs: ... error: get_param: param_path '*/*/job_stats': No such file or directory --- timestamp: 1713414091 top_jobs: ... PASS 850 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 851: fanotify can monitor open/read/write/close events for lustre fs ========================================================== 00:21:36 (1713414096) striped dir -i1 -c2 -H all_char /mnt/lustre/d851.sanity localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts. open:/mnt/lustre/d851.sanity/f_test_851_7509:6538:bash write:/mnt/lustre/d851.sanity/f_test_851_7509:6538:bash close:/mnt/lustre/d851.sanity/f_test_851_7509:6538:bash 1234567890 open:/mnt/lustre/d851.sanity/f_test_851_7509:6779:cat read:/mnt/lustre/d851.sanity/f_test_851_7509:6779:cat close:/mnt/lustre/d851.sanity/f_test_851_7509:6779: PASS 851 (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 900: umount should not race with any mgc requeue thread ========================================================== 00:21:42 (1713414102) fail_loc=0x903 cln..Failing mds1 on oleg146-server Stopping /mnt/lustre-mds1 (opts:) on oleg146-server 00:21:45 (1713414105) shut down Failover mds1 to oleg146-server mount facets: mds1 pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-MDT0000 00:22:00 (1713414120) targets are mounted 00:22:00 (1713414120) facet_failover done oleg146-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec Stopping clients: oleg146-client.virtnet /mnt/lustre (opts:) Stopping client oleg146-client.virtnet /mnt/lustre opts: Stopping clients: oleg146-client.virtnet /mnt/lustre2 (opts:) Stopping /mnt/lustre-mds1 (opts:-f) on oleg146-server Stopping /mnt/lustre-mds2 (opts:-f) on oleg146-server Stopping /mnt/lustre-ost1 (opts:-f) on oleg146-server Stopping /mnt/lustre-ost2 (opts:-f) on oleg146-server unloading modules on: 'oleg146-server' oleg146-server: oleg146-server.virtnet: executing unload_modules_local modules unloaded. 
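Test 900 wraps the fail_loc=0x903 race check in a full cln/mnt cycle: unmount the clients and servers, unload every Lustre module, then reload the modules and remount the whole stack. After the "modules unloaded." step nothing of the stack should remain in the kernel; a quick check with standard tooling (not part of the test itself):

    # expect no output here once cleanup has finished
    lsmod | grep -E 'lustre|lnet'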
mnt..Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions libkmod: kmod_module_get_holders: could not open '/sys/module/acpi_cpufreq/holders': No such file or directory ../libcfs/libcfs/libcfs options: 'cpu_npartitions=2' ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' quota/lquota options: 'hash_lqs_cur_bits=3' loading modules on: 'oleg146-server' oleg146-server: oleg146-server.virtnet: executing load_modules_local oleg146-server: Loading modules from /home/green/git/lustre-release/lustre oleg146-server: detected 4 online CPUs by sysfs oleg146-server: Force libcfs to create 2 CPU partitions oleg146-server: ptlrpc/ptlrpc options: 'lbug_on_grant_miscount=1' oleg146-server: quota/lquota options: 'hash_lqs_cur_bits=3' Checking servers environments Checking clients oleg146-client.virtnet environments Loading modules from /home/green/git/lustre-release/lustre detected 4 online CPUs by sysfs Force libcfs to create 2 CPU partitions loading modules on: 'oleg146-server' oleg146-server: oleg146-server.virtnet: executing load_modules_local oleg146-server: Loading modules from /home/green/git/lustre-release/lustre oleg146-server: detected 4 online CPUs by sysfs oleg146-server: Force libcfs to create 2 CPU partitions Setup mgs, mdt, osts pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-MDT0000 pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-MDT0001 pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-OST0000 pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg146-server: oleg146-server.virtnet: executing set_default_debug all all pdsh@oleg146-client: oleg146-server: ssh exited with exit code 1 Started lustre-OST0001 Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre Starting client oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre Started clients oleg146-client.virtnet: 192.168.201.146@tcp:/lustre on /mnt/lustre type lustre (rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project) Using TIMEOUT=20 osc.lustre-OST0000-osc-ffff88012e143000.idle_timeout=debug osc.lustre-OST0001-osc-ffff88012e143000.idle_timeout=debug disable quota as required done PASS 900 (169s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 901: don't leak a mgc lock on client umount ========================================================== 00:24:33 (1713414273) 192.168.201.146@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 
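Test 901 starts from the client's /proc/mounts entry shown above, confirming the mount and its negotiated options before the stop/start cycle that must not leak an MGC lock. The entry can be read back directly; e.g.:

    # show the Lustre client mount with its negotiated options
    grep ' /mnt/lustre ' /proc/mounts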
Stopping client oleg146-client.virtnet /mnt/lustre (opts:) Starting client: oleg146-client.virtnet: -o user_xattr,flock oleg146-server@tcp:/lustre /mnt/lustre PASS 901 (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 902: test short write doesn't hang lustre ========================================================== 00:24:40 (1713414280) fail_loc=0x1415 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0811217 s, 12.9 MB/s PASS 902 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 903: Test long page discard does not cause evictions ========================================================== 00:24:45 (1713414285) 6+0 records in 6+0 records out 6291456 bytes (6.3 MB) copied, 0.217995 s, 28.9 MB/s fail_loc=0x417 fail_val=20 Waiting for MDT destroys to complete Waiting 0s for local destroys to complete Waiting 1s for local destroys to complete Waiting 2s for local destroys to complete Waiting 3s for local destroys to complete Waiting 4s for local destroys to complete Waiting 5s for local destroys to complete Waiting 6s for local destroys to complete Waiting 7s for local destroys to complete Waiting 8s for local destroys to complete Waiting 9s for local destroys to complete Waiting 10s for local destroys to complete Waiting 11s for local destroys to complete Waiting 12s for local destroys to complete Waiting 13s for local destroys to complete Waiting 14s for local destroys to complete Waiting 15s for local destroys to complete Waiting 16s for local destroys to complete Waiting 17s for local destroys to complete Waiting 18s for local destroys to complete Waiting 19s for local destroys to complete Waiting 20s for local destroys to complete Waiting 21s for local destroys to complete Waiting 22s for local destroys to complete Waiting 23s for local destroys to complete Waiting 24s for local destroys to complete Waiting 25s for local destroys to complete Waiting 26s for local destroys to complete Waiting 27s for local destroys to complete Waiting 28s for local destroys to complete Waiting 29s for local destroys to complete Waiting 30s for local destroys to complete Waiting 31s for local destroys to complete Waiting 32s for local destroys to complete Waiting 33s for local destroys to complete Waiting 34s for local destroys to complete Waiting 35s for local destroys to complete Waiting 36s for local destroys to complete Waiting 37s for local destroys to complete Waiting 38s for local destroys to complete Waiting 39s for local destroys to complete Waiting 40s for local destroys to complete Waiting 41s for local destroys to complete Waiting 42s for local destroys to complete Waiting 43s for local destroys to complete Waiting 44s for local destroys to complete Waiting 45s for local destroys to complete Waiting 46s for local destroys to complete Waiting 47s for local destroys to complete Waiting 48s for local destroys to complete Waiting 49s for local destroys to complete Waiting 50s for local destroys to complete Waiting 51s for local destroys to complete Waiting 52s for local destroys to complete Waiting 53s for local destroys to complete Waiting 54s for local destroys to complete Waiting 55s for local destroys to complete Waiting 56s for local destroys to complete Waiting 57s for local destroys to complete Waiting 58s for local destroys to complete Waiting 59s for local destroys to complete Waiting 60s for local destroys to complete Waiting 61s for local destroys to complete Waiting 
62s for local destroys to complete Waiting 63s for local destroys to complete Waiting 64s for local destroys to complete Waiting 65s for local destroys to complete Waiting 66s for local destroys to complete Waiting 67s for local destroys to complete Waiting 68s for local destroys to complete Waiting 69s for local destroys to complete Waiting 70s for local destroys to complete Waiting 71s for local destroys to complete Waiting 72s for local destroys to complete Waiting 73s for local destroys to complete Waiting 74s for local destroys to complete Waiting 75s for local destroys to complete Waiting 76s for local destroys to complete Waiting 77s for local destroys to complete Waiting 78s for local destroys to complete Waiting 79s for local destroys to complete Waiting 80s for local destroys to complete Waiting 81s for local destroys to complete Waiting 82s for local destroys to complete Waiting 83s for local destroys to complete Waiting 84s for local destroys to complete Waiting 85s for local destroys to complete Waiting 86s for local destroys to complete Waiting 87s for local destroys to complete Waiting 88s for local destroys to complete Waiting 89s for local destroys to complete Waiting 90s for local destroys to complete Waiting 91s for local destroys to complete Waiting 92s for local destroys to complete Waiting 93s for local destroys to complete PASS 903 (138s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 904: virtual project ID xattr ============= 00:27:06 (1713414426) oleg146-server: debugfs 1.46.2.wc5 (26-Mar-2022) osd-ldiskfs.lustre-MDT0000.enable_projid_xattr=0 osd-ldiskfs.lustre-MDT0001.enable_projid_xattr=0 getfattr: Removing leading '/' from absolute path names osd-ldiskfs.lustre-MDT0000.enable_projid_xattr=1 osd-ldiskfs.lustre-MDT0001.enable_projid_xattr=1 getfattr: Removing leading '/' from absolute path names getfattr: Removing leading '/' from absolute path names getfattr: Removing leading '/' from absolute path names trusted.projid getfattr: Removing leading '/' from absolute path names setfattr: /mnt/lustre/d904.sanity/f904.sanity: Invalid argument getfattr: Removing leading '/' from absolute path names getfattr: Removing leading '/' from absolute path names osd-ldiskfs.lustre-MDT0000.enable_projid_xattr=0 osd-ldiskfs.lustre-MDT0001.enable_projid_xattr=0 PASS 904 (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 905: bad or new opcode should not stuck client ========================================================== 00:27:13 (1713414433) fail_val=21 fail_loc=0x0253 lfs ladvise: cannot give advice: Operation not supported (95) ladvise: cannot give advice 'willread' to file '/mnt/lustre/f905.sanity': Operation not supported PASS 905 (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 906: Simple test for io_uring I/O engine via fio ========================================================== 00:27:18 (1713414438) SKIP: sanity test_906 Client OS does not support io_uring I/O engine SKIP 906 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 907: write rpc error during unlink ======== 00:27:22 (1713414442) /mnt/lustre/f907.sanity lmm_stripe_count: 2 lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 0 obdidx objid objid group 0 56134 0xdb46 0x280000bd1 1 47044 0xb7c4 0x2c0000403 fail_val=3 fail_loc=0x80000216 17+0 records in 17+0 records out 
4456448 bytes (4.5 MB) copied, 0.125513 s, 35.5 MB/s PASS 907 (4s) debug_raw_pointers=0 debug_raw_pointers=0 == sanity test complete, duration 12740 sec ============== 00:27:27 (1713414447) === sanity: start cleanup 00:27:28 (1713414448) === === sanity: finish cleanup 00:29:53 (1713414593) === debug=super ioctl neterror warning dlmtrace error emerg ha rpctrace vfstrace config console lfsck
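The closing debug= line restores the debug mask after the run, which had been widened (the repeated "set_default_debug all all" lines) while the tests executed. The same restore can be issued directly; the mask string below is the one printed above:

    # restore the post-run Lustre kernel debug mask on this node
    lctl set_param debug="super ioctl neterror warning dlmtrace error emerg ha rpctrace vfstrace config console lfsck"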