-----============= acceptance-small: sanity ============----- Thu Apr 18 20:16:31 EDT 2024
excepting tests: 56oc 42a 42c 42b 118c 118d 407 411b
skipping tests SLOW=no: 27m 60i 64b 68 71 135 136 230d 300o 842
=== sanity: start setup 20:16:35 (1713485795) ===
oleg329-client.virtnet: executing check_config_client /mnt/lustre
oleg329-client.virtnet: Checking config lustre mounted on /mnt/lustre
Checking servers environments
Checking clients oleg329-client.virtnet environments
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b6384000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b6384000.idle_timeout=debug
disable quota as required
oleg329-server: oleg329-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
osd-ldiskfs.track_declares_assert=1
=== sanity: finish setup 20:16:43 (1713485803) ===
running as uid/gid/euid/egid 500/500/500/500, groups: [true]
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d0_runas_test/f7531]
preparing for tests involving mounts
mke2fs 1.46.2.wc5 (26-Mar-2022)
debug=all
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 0a: touch; rm ============================= 20:16:44 (1713485804)
/mnt/lustre/f0a.sanity has type file OK
/mnt/lustre/f0a.sanity: absent OK
PASS 0a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 0b: chmod 0755 /mnt/lustre ======================================================================================= 20:16:48 (1713485808)
/mnt/lustre has perms 0755 OK
PASS 0b (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 0c: check import proc ===================== 20:16:51 (1713485811)
state: FULL
state: FULL
target: lustre-MDT0000_UUID
target: lustre-MDT0001_UUID
PASS 0c (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 0d: check export proc ======================================================================================= 20:16:55 (1713485815)
mgc.MGC192.168.203.129@tcp.import=
import:
    name: MGC192.168.203.129@tcp
    target: MGS
    state: FULL
    connect_flags: [ version, barrier, adaptive_timeouts, full20, imp_recov, bulk_mbits, second_flags, reply_mbits, large_nid ]
    connect_data:
        flags: 0xa000011001002020
        instance: 0
        target_version: 2.15.62.25
    import_flags: [ pingable, connect_tried ]
    connection:
        failover_nids: [ "192.168.203.129@tcp" ]
        nids_stats:
            "192.168.203.129@tcp": { connects: 1, replied: 1, uptodate: false, sec_ago: 45 }
        current_connection: "192.168.203.129@tcp"
        connection_attempts: 1
        generation: 1
        in-progress_invalidations: 0
        idle: 11 sec
mgs.MGS.exports.192.168.203.29@tcp.export=
9f48186b-1ad8-433d-8347-0b2a55f8d930:
    name: MGS
    client: 192.168.203.29@tcp
    connect_flags: [ version, barrier, adaptive_timeouts, full20, imp_recov, bulk_mbits, second_flags, reply_mbits, large_nid ]
    connect_data:
        flags: 0xa000011001002020
        instance: 0
        target_version: 2.15.62.25
    export_flags: [ ]
PASS 0d (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 0e: Enable DNE MDT balancing for mkdir in the ROOT ========================================================== 20:17:00 (1713485820)
PASS 0e (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 0f: Symlink to /sys/kernel/debug/*/*/brw_stats should work properly ========================================================== 20:17:03 (1713485823)
PASS 0f (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 1: mkdir; remkdir; rmdir ================== 20:17:07 (1713485827)
striped dir -i1 -c2 -H all_char /mnt/lustre/d1.sanity
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d1.sanity/d2
mkdir: cannot create directory '/mnt/lustre/d1.sanity/d2': File exists
/mnt/lustre/d1.sanity/d2 has type dir OK
/mnt/lustre/d1.sanity: absent OK
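The touch/rm pattern that test 0a exercises is plain POSIX behavior and can be reproduced outside Lustre. A minimal sketch against a temporary local directory (the `f0a.sanity` name is taken from the log; nothing here is Lustre-specific):

```shell
#!/bin/sh
# Sketch of the test 0a check sequence: create a file, verify its type,
# remove it, verify it is absent. Runs on any POSIX filesystem.
set -e
MNT=$(mktemp -d)
f="$MNT/f0a.sanity"

touch "$f"
[ -f "$f" ] && echo "$f has type file OK"

rm "$f"
[ ! -e "$f" ] && echo "$f: absent OK"

rmdir "$MNT"
```

With `set -e`, a failed `[ ... ]` check aborts the script with a non-zero status, mirroring how a sanity subtest fails on its first unmet assertion.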
PASS 1 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 2: mkdir; touch; rmdir; check file ======== 20:17:11 (1713485831)
striped dir -i0 -c2 -H all_char /mnt/lustre/d2.sanity
/mnt/lustre/d2.sanity/f2.sanity has type file OK
/mnt/lustre/d2.sanity/f2.sanity: absent OK
PASS 2 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 3: mkdir; touch; rmdir; check dir ========= 20:17:15 (1713485835)
striped dir -i1 -c2 -H crush2 /mnt/lustre/d3.sanity
/mnt/lustre/d3.sanity has type dir OK
/mnt/lustre/d3.sanity/f3.sanity has type file OK
/mnt/lustre/d3.sanity: absent OK
PASS 3 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 4: mkdir; touch dir/file; rmdir; checkdir (expect error) ========================================================== 20:17:19 (1713485839)
striped dir -i1 -c2 -H crush2 /mnt/lustre/d4.sanity
rmdir: failed to remove '/mnt/lustre/d4.sanity': Directory not empty
PASS 4 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 5: mkdir .../d5 .../d5/d2; chmod .../d5/d2 ========================================================== 20:17:22 (1713485842)
striped dir -i1 -c2 -H crush2 /mnt/lustre/d5.sanity
striped dir -i1 -c2 -H all_char /mnt/lustre/d5.sanity/d2
/mnt/lustre/d5.sanity/d2 has type dir OK
/mnt/lustre/d5.sanity/d2 has perms 0707 OK
/mnt/lustre/d5.sanity/d2 has type dir OK
PASS 5 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 6a: touch f6a; chmod f6a; runas -u 500 -g 500 chmod f6a (should return error) ============================================================ 20:17:26 (1713485846)
/mnt/lustre/f6a.sanity has type file OK
/mnt/lustre/f6a.sanity has perms 0666 OK
/mnt/lustre/f6a.sanity is owned by user #0 OK
running as uid/gid/euid/egid 500/500/500/500, groups: [chmod] [0444] [/mnt/lustre/f6a.sanity]
chmod: changing permissions of '/mnt/lustre/f6a.sanity': Operation not permitted
/mnt/lustre/f6a.sanity has type file OK
/mnt/lustre/f6a.sanity has perms 0666 OK
/mnt/lustre/f6a.sanity is owned by user #0 OK
PASS 6a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 6c: touch f6c; chown f6c; runas -u 500 -g 500 chown f6c (should return error) ============================================================ 20:17:30 (1713485850)
/mnt/lustre/f6c.sanity has type file OK
/mnt/lustre/f6c.sanity is owned by user #500 OK
running as uid/gid/euid/egid 500/500/500/500, groups: [chown] [0] [/mnt/lustre/f6c.sanity]
chown: changing ownership of '/mnt/lustre/f6c.sanity': Operation not permitted
/mnt/lustre/f6c.sanity has type file OK
/mnt/lustre/f6c.sanity is owned by user #500 OK
PASS 6c (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 6e: touch+chgrp ; runas -u 500 -g 500 chgrp (should return error) ========================================================== 20:17:33 (1713485853)
/mnt/lustre/f6e.sanity has type file OK
/mnt/lustre/f6e.sanity is owned by user #0 OK
/mnt/lustre/f6e.sanity is owned by group #500 OK
running as uid/gid/euid/egid 500/500/500/500, groups: [chgrp] [0] [/mnt/lustre/f6e.sanity]
chgrp: changing group of '/mnt/lustre/f6e.sanity': Operation not permitted
/mnt/lustre/f6e.sanity has type file OK
/mnt/lustre/f6e.sanity is owned by user #0 OK
/mnt/lustre/f6e.sanity is owned by group #500 OK
PASS 6e (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 6g: verify new dir in sgid dir inherits group ========================================================== 20:17:37 (1713485857)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d6g.sanity
running as uid/gid/euid/egid 500/500/500/500, groups: [mkdir] [/mnt/lustre/d6g.sanity/d]
striped dir -i0 -c2 -H crush /mnt/lustre/d6g.sanity/d/subdir
/mnt/lustre/d6g.sanity/d/subdir is owned by group #500 OK
/mnt/lustre/d6g.sanity.local/d6g.sanity.remote is owned by group #500 OK
/mnt/lustre/d6g.sanity.local/d6g.sanity.remote has perms 02755 OK
PASS 6g (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 6h: runas -u 500 -g 500 chown RUNAS_ID.0 .../ (should return error) ========================================================== 20:17:41 (1713485861)
running as uid/gid/euid/egid 500/500/500/500, groups: 500 [chown] [500:0] [/mnt/lustre/f6h.sanity]
chown: changing ownership of '/mnt/lustre/f6h.sanity': Operation not permitted
/mnt/lustre/f6h.sanity has type file OK
/mnt/lustre/f6h.sanity is owned by user #500 OK
/mnt/lustre/f6h.sanity is owned by group #500 OK
PASS 6h (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 7a: mkdir .../d7; mcreate .../d7/f; chmod .../d7/f ============================================================== 20:17:44 (1713485864)
striped dir -i1 -c2 -H all_char /mnt/lustre/d7a.sanity
/mnt/lustre/d7a.sanity/f7a.sanity has type file OK
/mnt/lustre/d7a.sanity/f7a.sanity has perms 0666 OK
PASS 7a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 7b: mkdir .../d7; mcreate d7/f2; echo foo > d7/f2 =============================================================== 20:17:47 (1713485867)
striped dir -i1 -c2 -H all_char /mnt/lustre/d7b.sanity
/mnt/lustre/d7b.sanity/f7b.sanity has type file OK
/mnt/lustre/d7b.sanity/f7b.sanity has size 3 OK
PASS 7b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 8: mkdir .../d8; touch .../d8/f; chmod .../d8/f ================================================================= 20:17:51 (1713485871)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d8.sanity
/mnt/lustre/d8.sanity/f8.sanity has type file OK
/mnt/lustre/d8.sanity/f8.sanity has perms 0666 OK
PASS 8 (1s)
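Test 6g's check ("new dir in sgid dir inherits group") relies on standard POSIX setgid-directory semantics: a directory created inside a setgid directory inherits that directory's group, and on Linux also the setgid bit itself. A hedged local sketch of the same check, without the `runas`/multi-user setup the log shows:

```shell
#!/bin/sh
# Sketch of the setgid-inheritance behavior behind test 6g: subdirectories
# of a g+s directory inherit its group and, on Linux, the setgid bit.
set -e
D=$(mktemp -d)
chmod 2755 "$D"          # set the setgid bit on the parent directory
mkdir "$D/subdir"

parent_gid=$(stat -c %g "$D")
child_gid=$(stat -c %g "$D/subdir")
[ "$parent_gid" = "$child_gid" ] && echo "subdir is owned by group #$child_gid OK"

# On Linux the setgid bit propagates to new subdirectories (mode 2xxx)
case $(stat -c %a "$D/subdir") in
    2*) echo "subdir has setgid bit OK" ;;
esac

rm -r "$D"
```

In the real test the subdirectory is created by uid 500 while the parent belongs to a different group, which is what makes the inheritance observable; here the single-user sketch only demonstrates the mechanism.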
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 9: mkdir .../d9 .../d9/d2 .../d9/d2/d3 ========================================================================== 20:17:54 (1713485874)
striped dir -i1 -c2 -H crush2 /mnt/lustre/d9.sanity
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d9.sanity/d2
striped dir -i1 -c2 -H crush2 /mnt/lustre/d9.sanity/d2/d3
/mnt/lustre/d9.sanity/d2/d3 has type dir OK
PASS 9 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 10: mkdir .../d10 .../d10/d2; touch .../d10/d2/f ================================================================ 20:17:57 (1713485877)
striped dir -i0 -c2 -H all_char /mnt/lustre/d10.sanity
striped dir -i0 -c2 -H crush2 /mnt/lustre/d10.sanity/d2
/mnt/lustre/d10.sanity/d2/f10.sanity has type file OK
PASS 10 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 11: mkdir .../d11 d11/d2; chmod .../d11/d2 ====================================================================== 20:18:01 (1713485881)
striped dir -i1 -c2 -H crush /mnt/lustre/d11.sanity
striped dir -i1 -c2 -H crush /mnt/lustre/d11.sanity/d2
/mnt/lustre/d11.sanity/d2 has type dir OK
/mnt/lustre/d11.sanity/d2 has perms 0705 OK
PASS 11 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 12: touch .../d12/f; chmod .../d12/f .../d12/f ================================================================== 20:18:04 (1713485884)
striped dir -i0 -c2 -H crush /mnt/lustre/d12.sanity
/mnt/lustre/d12.sanity/f12.sanity has type file OK
/mnt/lustre/d12.sanity/f12.sanity has perms 0654 OK
PASS 12 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 13: creat .../d13/f; dd .../d13/f; > .../d13/f ================================================================== 20:18:08 (1713485888)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d13.sanity
10+0 records in
10+0 records out
5120 bytes (5.1 kB) copied, 0.0075556 s, 678 kB/s
/mnt/lustre/d13.sanity/f13.sanity has type file OK
/mnt/lustre/d13.sanity/f13.sanity has size 0 OK
PASS 13 (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 14: touch .../d14/f; rm .../d14/f; rm .../d14/f ================================================================= 20:18:11 (1713485891)
striped dir -i0 -c2 -H crush /mnt/lustre/d14.sanity
/mnt/lustre/d14.sanity/f14.sanity: absent OK
PASS 14 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 15: touch .../d15/f; mv .../d15/f .../d15/f2 ==================================================================== 20:18:14 (1713485894)
striped dir -i1 -c2 -H all_char /mnt/lustre/d15.sanity
/mnt/lustre/d15.sanity/f15.sanity_2 has type file OK
PASS 15 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 16: touch .../d16/f; rm -rf .../d16/f ===== 20:18:17 (1713485897)
striped dir -i0 -c2 -H all_char /mnt/lustre/d16.sanity
/mnt/lustre/d16.sanity/f16.sanity: absent OK
PASS 16 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 17a: symlinks: create, remove (real) ====== 20:18:21 (1713485901)
striped dir -i1 -c2 -H all_char /mnt/lustre/d17a.sanity
total 0
-rw-r--r-- 1 root root 0 Apr 18 20:18 f17a.sanity
lrwxrwxrwx 1 root root 35 Apr 18 20:18 l-exist -> /mnt/lustre/d17a.sanity/f17a.sanity
/mnt/lustre/d17a.sanity/l-exist links to /mnt/lustre/d17a.sanity/f17a.sanity OK
/mnt/lustre/d17a.sanity/l-exist has type f OK
/mnt/lustre/d17a.sanity/l-exist: absent OK
PASS 17a (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 17b: symlinks: create, remove (dangling) == 20:18:24 (1713485904)
striped dir -i1 -c2 -H crush /mnt/lustre/d17b.sanity
total 0
lrwxrwxrwx 1 root root 12 Apr 18 20:18 l-dangle -> no-such-file
/mnt/lustre/d17b.sanity/l-dangle links to no-such-file OK
/mnt/lustre/d17b.sanity/l-dangle: absent OK
/mnt/lustre/d17b.sanity/l-dangle: absent OK
PASS 17b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 17c: symlinks: open dangling (should return error) ========================================================== 20:18:27 (1713485907)
striped dir -i1 -c2 -H crush2 /mnt/lustre/d17c.sanity
cat: /mnt/lustre/d17c.sanity/f17c.sanity: No such file or directory
PASS 17c (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 17d: symlinks: create dangling ============ 20:18:31 (1713485911)
striped dir -i1 -c2 -H all_char /mnt/lustre/d17d.sanity
PASS 17d (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 17e: symlinks: create recursive symlink (should return error) ========================================================== 20:18:34 (1713485914)
striped dir -i1 -c2 -H crush /mnt/lustre/d17e.sanity
lrwxrwxrwx 1 root root 35 Apr 18 20:18 /mnt/lustre/d17e.sanity/f17e.sanity -> /mnt/lustre/d17e.sanity/f17e.sanity
ls: cannot access /mnt/lustre/d17e.sanity/f17e.sanity: Too many levels of symbolic links
PASS 17e (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 17f: symlinks: long and very long symlink name ========================================================== 20:18:37 (1713485917)
striped dir -i1 -c2 -H crush2 /mnt/lustre/d17f.sanity
total 20
lrwxrwxrwx 1 root root 43 Apr 18 20:18 111 -> 1234567890/2234567890/3234567890/4234567890
lrwxrwxrwx 1 root root 65 Apr 18 20:18 222 -> 1234567890/2234567890/3234567890/4234567890/5234567890/6234567890
lrwxrwxrwx 1 root root 87 Apr 18 20:18 333 -> 1234567890/2234567890/3234567890/4234567890/5234567890/6234567890/7234567890/8234567890
lrwxrwxrwx 1 root root 120 Apr 18 20:18 444 -> 1234567890/2234567890/3234567890/4234567890/5234567890/6234567890/7234567890/8234567890/9234567890/a234567890/b234567890
lrwxrwxrwx 1 root root 153 Apr 18 20:18 555 -> 1234567890/2234567890/3234567890/4234567890/5234567890/6234567890/7234567890/8234567890/9234567890/a234567890/b234567890/c234567890/d234567890/f234567890
lrwxrwxrwx 1 root root 220 Apr 18 20:18 666 -> 1234567890/2234567890/3234567890/4234567890/5234567890/6234567890/7234567890/8234567890/9234567890/a234567890/b234567890/c234567890/d234567890/f234567890/aaaaaaaaaa/bbbbbbbbbb/cccccccccc/dddddddddd/eeeeeeeeee/ffffffffff/
PASS 17f (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 17g: symlinks: really long symlink name and inode boundaries ========================================================== 20:18:40 (1713485920)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d17g.sanity
[output elided: echoed symlink names consisting of long runs of 'x' characters at increasing lengths across inode boundaries]
PASS 17g (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 17h: create objects: lov_free_memmd() doesn't lbug ========================================================== 20:18:44 (1713485924)
striped dir -i1 -c2 -H crush /mnt/lustre/d17h.sanity
fail_loc=0x80000141
PASS 17h (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 17i: don't panic on short symlink (should return error) ========================================================== 20:18:48 (1713485928)
striped dir -i1 -c1 -H crush2 /mnt/lustre/d17i.sanity
fail_loc=0x80000143
ls: cannot read symbolic link /mnt/lustre/d17i.sanity/f17i.sanity: Protocol error
lrwxrwxrwx 1 root root 35 Apr 18 20:18 /mnt/lustre/d17i.sanity/f17i.sanity
PASS 17i (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 17k: symlinks: rsync with xattrs enabled == 20:18:51 (1713485931)
striped dir -i1 -c2 -H all_char /mnt/lustre/d17k.sanity
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d17k.sanity.new
sending incremental file list
./
f17k.sanity
f17k.sanity.lnk -> /mnt/lustre/d17k.sanity/f17k.sanity
sent 867 bytes  received 50 bytes  1,834.00 bytes/sec
total size is 35  speedup is 0.04
PASS 17k (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 17l: Ensure lgetxattr's returned xattr size is consistent ========================================================== 20:18:54 (1713485934)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d17l.sanity
PASS 17l (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 17m: run e2fsck against MDT which contains short/long symlink
========================================================== 20:18:58 (1713485938) striped dir -i1 -c2 -H crush2 /mnt/lustre/d17m.sanity create 512 short and long symlink files under /mnt/lustre/d17m.sanity erase them Waiting for MDT destroys to complete recreate the 512 symlink files with a shorter string stop and checking mds2: Stopping /mnt/lustre-mds2 (opts:) on oleg329-server e2fsck -d -v -t -t -f -n /dev/mapper/mds2_flakey -m8 oleg329-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg329-server: Use max possible thread num: 2 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 16) [Thread 1] Scan group range [16, 32) [Thread 1] jumping to group 16 [Thread 0] jumping to group 0 [Thread 0] e2fsck_pass1_run:2564: increase inode 78 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 79 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 80 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 
badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084 [Thread 0] 
e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 
0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084 [Thread 1] group 17 finished [Thread 1] group 18 finished [Thread 1] group 19 finished [Thread 1] group 20 finished [Thread 1] group 21 finished [Thread 1] group 22 finished [Thread 1] group 23 finished [Thread 1] group 24 finished [Thread 1] group 25 finished [Thread 1] group 26 finished [Thread 1] group 27 finished [Thread 1] group 28 finished [Thread 1] group 29 finished [Thread 1] group 30 finished [Thread 1] group 31 finished [Thread 1] group 32 finished [Thread 1] Pass 1: Memory used: 740k/0k (510k/231k), time: 0.01/ 0.01/ 0.00 [Thread 1] Pass 1: I/O read: 3MB, write: 0MB, rate: 333.30MB/s [Thread 1] Scanned group range [16, 32), inodes 649 [Thread 0] group 1 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 32001 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32002 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32003 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32004 badness 0 to 2 for 10084 [Thread 0] group 2 finished [Thread 0] group 3 finished [Thread 0] group 4 finished [Thread 0] group 5 finished [Thread 0] group 6 finished [Thread 0] group 7 finished [Thread 0] group 8 finished [Thread 0] group 9 finished [Thread 0] group 10 finished [Thread 0] group 11 finished [Thread 0] group 12 finished [Thread 0] group 13 finished [Thread 0] group 14 finished [Thread 0] group 15 finished [Thread 0] group 16 finished [Thread 0] Pass 1: Memory used: 740k/0k (418k/323k), time: 0.01/ 0.01/ 0.00 [Thread 0] Pass 1: I/O read: 2MB, write: 0MB, rate: 189.48MB/s [Thread 0] Scanned 
group range [0, 16), inodes 214
Pass 2: Checking directory structure
Pass 2: Memory used: 740k/0k (80k/661k), time: 0.01/ 0.00/ 0.00
Pass 2: I/O read: 3MB, write: 0MB, rate: 562.43MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 740k/0k (80k/661k), time: 0.05/ 0.04/ 0.01
Pass 3: Memory used: 740k/0k (77k/664k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 740k/0k (72k/669k), time: 0.03/ 0.03/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 740k/0k (71k/670k), time: 0.01/ 0.01/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 118.46MB/s
861 inodes used (0.08%, out of 1024000)
4 non-contiguous files (0.5%)
1 non-contiguous directory (0.1%)
# of inodes with ind/dind/tind blocks: 2/0/0
284497 blocks used (44.45%, out of 640000)
0 bad blocks
1 large file
146 regular files
190 directories
0 character device files
0 block device files
0 fifos
0 links
515 symbolic links (257 fast symbolic links)
0 sockets
------------
851 files
Memory used: 740k/0k (70k/671k), time: 0.09/ 0.08/ 0.01
I/O read: 4MB, write: 0MB, rate: 44.62MB/s
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg329-server: oleg329-server.virtnet: executing set_default_debug all all
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
Started lustre-MDT0001
PASS 17m (36s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 17n: run e2fsck against master/slave MDT which contains remote dir ========================================================== 20:19:36 (1713485976)
striped dir -i1 -c2 -H all_char /mnt/lustre/d17n.sanity
total: 10 open/close in 0.10 seconds: 104.33 ops/second
total: 10 open/close in 0.09 seconds: 112.14 ops/second
total: 10 open/close in 0.10 seconds: 98.23 ops/second
total: 10 open/close in 0.09 seconds: 111.44 ops/second
total: 10
open/close in 0.10 seconds: 101.07 ops/second total: 10 open/close in 0.09 seconds: 105.90 ops/second total: 10 open/close in 0.10 seconds: 104.70 ops/second total: 10 open/close in 0.09 seconds: 116.16 ops/second total: 10 open/close in 0.09 seconds: 114.30 ops/second total: 10 open/close in 0.09 seconds: 111.81 ops/second Stopping /mnt/lustre-mds1 (opts:) on oleg329-server e2fsck -d -v -t -t -f -n /dev/mapper/mds1_flakey -m8 oleg329-server: e2fsck 1.46.2.wc5 (26-Mar-2022) oleg329-server: Use max possible thread num: 2 instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 16) [Thread 1] Scan group range [16, 32) [Thread 0] jumping to group 0 [Thread 1] jumping to group 16 [Thread 1] e2fsck_pass1_run:2564: increase inode 512039 badness 0 to 2 for 10084 [Thread 1] group 17 finished [Thread 1] group 18 finished [Thread 1] group 19 finished [Thread 1] group 20 finished [Thread 1] group 21 finished [Thread 1] group 22 finished [Thread 1] group 23 finished [Thread 1] group 24 finished [Thread 1] group 25 finished [Thread 1] group 26 finished [Thread 1] group 27 finished [Thread 1] group 28 finished [Thread 1] group 29 finished [Thread 1] group 30 finished [Thread 1] group 31 finished [Thread 1] group 32 finished [Thread 1] Pass 1: Memory used: 740k/0k (508k/233k), time: 0.00/ 0.00/ 0.00 [Thread 1] Pass 1: I/O read: 1MB, write: 0MB, rate: 401.93MB/s [Thread 1] Scanned group range [16, 32), inodes 238 [Thread 0] e2fsck_pass1_run:2564: increase inode 78 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 79 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 80 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084 [Thread 0] 
e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 
10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase 
inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 160 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 164 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 165 badness 0 to 2 for 10084 [Thread 0] 
e2fsck_pass1_run:2564: increase inode 183 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 32001 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32002 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32003 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32004 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32005 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32007 badness 0 to 2 for 10084 [Thread 0] group 2 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 64001 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 64002 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 64003 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 64004 badness 0 to 2 for 10084 [Thread 0] group 3 finished [Thread 0] group 4 finished [Thread 0] group 5 finished [Thread 0] group 6 finished [Thread 0] group 7 finished [Thread 0] group 8 finished [Thread 0] group 9 finished [Thread 0] group 10 finished [Thread 0] group 11 finished [Thread 0] group 12 finished [Thread 0] group 13 finished [Thread 0] group 14 finished [Thread 0] group 15 finished [Thread 0] group 16 finished [Thread 0] Pass 1: Memory used: 820k/0k (502k/319k), time: 0.01/ 0.01/ 0.00 [Thread 0] Pass 1: I/O read: 3MB, write: 0MB, rate: 248.61MB/s [Thread 0] Scanned group range [0, 16), inodes 733 Pass 2: Checking directory structure Pass 2: Memory used: 820k/0k (82k/739k), time: 0.01/ 0.00/ 0.00 Pass 2: I/O read: 2MB, write: 0MB, rate: 257.40MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 820k/0k (82k/739k), time: 0.06/ 0.05/ 0.00 Pass 3: Memory used: 820k/0k (79k/742k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 4: Checking reference counts Pass 4: Memory used: 820k/0k (73k/748k), time: 0.03/ 0.03/ 0.00 Pass 4: I/O 
read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 820k/0k (71k/750k), time: 0.01/ 0.01/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 154.56MB/s
456 inodes used (0.04%, out of 1024000)
7 non-contiguous files (1.5%)
1 non-contiguous directory (0.2%)
# of inodes with ind/dind/tind blocks: 2/0/0
284293 blocks used (44.42%, out of 640000)
0 bad blocks
1 large file
233 regular files
206 directories
0 character device files
0 block device files
0 fifos
0 links
7 symbolic links (3 fast symbolic links)
0 sockets
------------
446 files
Memory used: 820k/0k (70k/751k), time: 0.09/ 0.08/ 0.00
I/O read: 2MB, write: 0MB, rate: 21.75MB/s
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg329-server: oleg329-server.virtnet: executing set_default_debug all all
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
Started lustre-MDT0000
Stopping /mnt/lustre-mds2 (opts:) on oleg329-server
e2fsck -d -v -t -t -f -n /dev/mapper/mds2_flakey -m8
oleg329-server: e2fsck 1.46.2.wc5 (26-Mar-2022)
oleg329-server: Use max possible thread num: 2 instead
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 16)
[Thread 1] Scan group range [16, 32)
[Thread 0] jumping to group 0
[Thread 1] jumping to group 16
[Thread 1] group 17 finished
[Thread 1] group 18 finished
[Thread 1] group 19 finished
[Thread 1] group 20 finished
[Thread 1] group 21 finished
[Thread 1] group 22 finished
[Thread 1] group 23 finished
[Thread 1] group 24 finished
[Thread 1] group 25 finished
[Thread 1] group 26 finished
[Thread 1] group 27 finished
[Thread 1] group 28 finished
[Thread 1] group 29 finished
[Thread 1] group 30 finished
[Thread 1] group 31 finished
[Thread 1] group 32 finished
[Thread 1] Pass 1: Memory used: 672k/0k (432k/241k), time: 0.00/ 0.00/ 0.00
[Thread 1] Pass 1: I/O read: 1MB, write: 0MB, rate: 369.96MB/s
[Thread 1] Scanned group range [16, 32), inodes 649
[Thread 0] e2fsck_pass1_run:2564:
increase inode 78 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 79 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 80 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] 
e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 
0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 32001 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32002 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32003 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32004 badness 0 to 2 for 10084 [Thread 0] group 2 finished [Thread 0] group 3 finished [Thread 0] group 4 finished [Thread 0] group 5 finished [Thread 0] group 6 finished [Thread 0] group 7 finished [Thread 0] group 8 finished [Thread 0] group 9 finished 
[Thread 0] group 10 finished
[Thread 0] group 11 finished
[Thread 0] group 12 finished
[Thread 0] group 13 finished
[Thread 0] group 14 finished
[Thread 0] group 15 finished
[Thread 0] group 16 finished
[Thread 0] Pass 1: Memory used: 744k/0k (423k/322k), time: 0.01/ 0.01/ 0.00
[Thread 0] Pass 1: I/O read: 2MB, write: 0MB, rate: 206.33MB/s
[Thread 0] Scanned group range [0, 16), inodes 215
Pass 2: Checking directory structure
Pass 2: Memory used: 744k/0k (83k/662k), time: 0.01/ 0.00/ 0.00
Pass 2: I/O read: 2MB, write: 0MB, rate: 207.02MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 744k/0k (83k/662k), time: 0.06/ 0.04/ 0.00
Pass 3: Memory used: 744k/0k (80k/665k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 744k/0k (73k/672k), time: 0.03/ 0.03/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 744k/0k (72k/673k), time: 0.01/ 0.01/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 124.98MB/s
417 inodes used (0.04%, out of 1024000)
3 non-contiguous files (0.7%)
1 non-contiguous directory (0.2%)
# of inodes with ind/dind/tind blocks: 0/0/0
284240 blocks used (44.41%, out of 640000)
0 bad blocks
1 large file
173 regular files
223 directories
0 character device files
0 block device files
0 fifos
0 links
11 symbolic links (6 fast symbolic links)
0 sockets
------------
407 files
Memory used: 744k/0k (70k/675k), time: 0.10/ 0.08/ 0.00
I/O read: 2MB, write: 0MB, rate: 20.85MB/s
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg329-server: oleg329-server.virtnet: executing set_default_debug all all
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
Started lustre-MDT0001
Stopping /mnt/lustre-mds1 (opts:) on oleg329-server
e2fsck -d -v -t -t -f -n /dev/mapper/mds1_flakey -m8
oleg329-server: e2fsck 1.46.2.wc5 (26-Mar-2022)
oleg329-server: Use max possible thread num: 2
instead Pass 1: Checking inodes, blocks, and sizes [Thread 0] Scan group range [0, 16) [Thread 1] Scan group range [16, 32) [Thread 1] jumping to group 16 [Thread 0] jumping to group 0 [Thread 1] e2fsck_pass1_run:2564: increase inode 512039 badness 0 to 2 for 10084 [Thread 1] group 17 finished [Thread 1] group 18 finished [Thread 1] group 19 finished [Thread 1] group 20 finished [Thread 1] group 21 finished [Thread 1] group 22 finished [Thread 1] group 23 finished [Thread 1] group 24 finished [Thread 1] group 25 finished [Thread 1] group 26 finished [Thread 1] group 27 finished [Thread 1] group 28 finished [Thread 1] group 29 finished [Thread 1] group 30 finished [Thread 1] group 31 finished [Thread 1] group 32 finished [Thread 1] Pass 1: Memory used: 740k/0k (505k/236k), time: 0.00/ 0.00/ 0.00 [Thread 1] Pass 1: I/O read: 1MB, write: 0MB, rate: 491.64MB/s [Thread 1] Scanned group range [16, 32), inodes 238 [Thread 0] e2fsck_pass1_run:2564: increase inode 78 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 79 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 80 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084 
[Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 117 
badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084 [Thread 0] 
e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 159 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 160 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 161 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 164 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 165 badness 0 to 2 for 10084 [Thread 0] group 1 finished [Thread 0] e2fsck_pass1_run:2564: increase inode 32001 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32002 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32003 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 32004 badness 0 to 2 for 10084 [Thread 0] 
e2fsck_pass1_run:2564: increase inode 32005 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 32007 badness 0 to 2 for 10084
[Thread 0] group 2 finished
[Thread 0] e2fsck_pass1_run:2564: increase inode 64001 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 64002 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 64003 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 64004 badness 0 to 2 for 10084
[Thread 0] group 3 finished
[Thread 0] group 4 finished
[Thread 0] group 5 finished
[Thread 0] group 6 finished
[Thread 0] group 7 finished
[Thread 0] group 8 finished
[Thread 0] group 9 finished
[Thread 0] group 10 finished
[Thread 0] group 11 finished
[Thread 0] group 12 finished
[Thread 0] group 13 finished
[Thread 0] group 14 finished
[Thread 0] group 15 finished
[Thread 0] group 16 finished
[Thread 0] Pass 1: Memory used: 816k/0k (499k/318k), time: 0.01/ 0.01/ 0.00
[Thread 0] Pass 1: I/O read: 3MB, write: 0MB, rate: 327.26MB/s
[Thread 0] Scanned group range [0, 16), inodes 733
Pass 2: Checking directory structure
Pass 2: Memory used: 816k/0k (80k/737k), time: 0.01/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 134.63MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 816k/0k (80k/737k), time: 0.06/ 0.04/ 0.01
Pass 3: Memory used: 816k/0k (77k/740k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 816k/0k (73k/744k), time: 0.03/ 0.03/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 816k/0k (71k/746k), time: 0.01/ 0.01/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 151.70MB/s
372 inodes used (0.04%, out of 1024000)
6 non-contiguous files (1.6%)
1 non-contiguous directory (0.3%)
# of inodes with ind/dind/tind blocks: 0/0/0
284251 blocks used (44.41%, out of 640000)
0 bad blocks
1 large file
164 regular files
191 directories
0 character device files
0 block device files
0 fifos
0 links
7 symbolic links (3 fast symbolic links)
0 sockets
------------
362 files
Memory used: 816k/0k (70k/747k), time: 0.09/ 0.08/ 0.01
I/O read: 2MB, write: 0MB, rate: 22.28MB/s
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg329-server: oleg329-server.virtnet: executing set_default_debug all all
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
Started lustre-MDT0000
Stopping /mnt/lustre-mds2 (opts:) on oleg329-server
e2fsck -d -v -t -t -f -n /dev/mapper/mds2_flakey -m8
oleg329-server: e2fsck 1.46.2.wc5 (26-Mar-2022)
oleg329-server: Use max possible thread num: 2 instead
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 16)
[Thread 1] Scan group range [16, 32)
[Thread 0] jumping to group 0
[Thread 1] jumping to group 16
[Thread 1] group 17 finished
[Thread 1] group 18 finished
[Thread 1] group 19 finished
[Thread 1] group 20 finished
[Thread 1] group 21 finished
[Thread 1] group 22 finished
[Thread 1] group 23 finished
[Thread 1] group 24 finished
[Thread 1] group 25 finished
[Thread 1] group 26 finished
[Thread 1] group 27 finished
[Thread 1] group 28 finished
[Thread 1] group 29 finished
[Thread 1] group 30 finished
[Thread 1] group 31 finished
[Thread 1] group 32 finished
[Thread 1] Pass 1: Memory used: 672k/0k (428k/245k), time: 0.00/ 0.00/ 0.00
[Thread 1] Pass 1: I/O read: 1MB, write: 0MB, rate: 330.80MB/s
[Thread 1] Scanned group range [16, 32), inodes 649
[Thread 0] e2fsck_pass1_run:2564: increase inode 78 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 79 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 80 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084
[Thread 0] group 1 finished
[Thread 0] e2fsck_pass1_run:2564: increase inode 32001 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 32002 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 32003 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 32004 badness 0 to 2 for 10084
[Thread 0] group 2 finished
[Thread 0] group 3 finished
[Thread 0] group 4 finished
[Thread 0] group 5 finished
[Thread 0] group 6 finished
[Thread 0] group 7 finished
[Thread 0] group 8 finished
[Thread 0] group 9 finished
[Thread 0] group 10 finished
[Thread 0] group 11 finished
[Thread 0] group 12 finished
[Thread 0] group 13 finished
[Thread 0] group 14 finished
[Thread 0] group 15 finished
[Thread 0] group 16 finished
[Thread 0] Pass 1: Memory used: 740k/0k (419k/322k), time: 0.01/ 0.01/ 0.00
[Thread 0] Pass 1: I/O read: 2MB, write: 0MB, rate: 220.53MB/s
[Thread 0] Scanned group range [0, 16), inodes 215
Pass 2: Checking directory structure
Pass 2: Memory used: 740k/0k (81k/660k), time: 0.01/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 144.18MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 740k/0k (81k/660k), time: 0.05/ 0.04/ 0.01
Pass 3: Memory used: 740k/0k (78k/663k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 740k/0k (73k/668k), time: 0.03/ 0.03/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 740k/0k (72k/669k), time: 0.01/ 0.01/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 113.02MB/s
358 inodes used (0.03%, out of 1024000)
3 non-contiguous files (0.8%)
1 non-contiguous directory (0.3%)
# of inodes with ind/dind/tind blocks: 0/0/0
284221 blocks used (44.41%, out of 640000)
0 bad blocks
1 large file
144 regular files
193 directories
0 character device files
0 block device files
0 fifos
0 links
11 symbolic links (6 fast symbolic links)
0 sockets
------------
348 files
Memory used: 740k/0k (70k/671k), time: 0.09/ 0.08/ 0.01
I/O read: 2MB, write: 0MB, rate: 21.69MB/s
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg329-server: oleg329-server.virtnet: executing set_default_debug all all
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
Started lustre-MDT0001
total: 10 open/close in 0.07 seconds: 143.62 ops/second
total: 10 open/close in 0.07 seconds: 136.46 ops/second
total: 10 open/close in 0.07 seconds: 138.55 ops/second
total: 10 open/close in 0.07 seconds: 135.22 ops/second
total: 10 open/close in 0.07 seconds: 144.87 ops/second
total: 10 open/close in 0.08 seconds: 130.95 ops/second
total: 10 open/close in 0.08 seconds: 130.27 ops/second
total: 10 open/close in 0.07 seconds: 135.79 ops/second
total: 10 open/close in 0.08 seconds: 130.44 ops/second
total: 10 open/close in 0.07 seconds: 137.49 ops/second
Stopping /mnt/lustre-mds1 (opts:) on oleg329-server
e2fsck -d -v -t -t -f -n /dev/mapper/mds1_flakey -m8
oleg329-server: e2fsck 1.46.2.wc5 (26-Mar-2022)
oleg329-server: Use max possible thread num: 2 instead
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 16)
[Thread 1] Scan group range [16, 32)
[Thread 1] jumping to group 16
[Thread 0] jumping to group 0
[Thread 1] e2fsck_pass1_run:2564: increase inode 512039 badness 0 to 2 for 10084
[Thread 1] group 17 finished
[Thread 1] group 18 finished
[Thread 1] group 19 finished
[Thread 1] group 20 finished
[Thread 1] group 21 finished
[Thread 1] group 22 finished
[Thread 1] group 23 finished
[Thread 1] group 24 finished
[Thread 1] group 25 finished
[Thread 1] group 26 finished
[Thread 1] group 27 finished
[Thread 1] group 28 finished
[Thread 1] group 29 finished
[Thread 1] group 30 finished
[Thread 1] group 31 finished
[Thread 1] group 32 finished
[Thread 1] Pass 1: Memory used: 740k/0k (506k/235k), time: 0.00/ 0.00/ 0.00
[Thread 1] Pass 1: I/O read: 1MB, write: 0MB, rate: 561.48MB/s
[Thread 1] Scanned group range [16, 32), inodes 238
[Thread 0] e2fsck_pass1_run:2564: increase inode 78 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 79 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 80 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 159 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 160 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 161 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 164 badness 0 to 2 for 10084
[Thread 0] group 1 finished
[Thread 0] e2fsck_pass1_run:2564: increase inode 32001 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 32002 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 32003 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 32004 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 32005 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 32007 badness 0 to 2 for 10084
[Thread 0] group 2 finished
[Thread 0] e2fsck_pass1_run:2564: increase inode 64001 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 64002 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 64003 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 64004 badness 0 to 2 for 10084
[Thread 0] group 3 finished
[Thread 0] group 4 finished
[Thread 0] group 5 finished
[Thread 0] group 6 finished
[Thread 0] group 7 finished
[Thread 0] group 8 finished
[Thread 0] group 9 finished
[Thread 0] group 10 finished
[Thread 0] group 11 finished
[Thread 0] group 12 finished
[Thread 0] group 13 finished
[Thread 0] group 14 finished
[Thread 0] group 15 finished
[Thread 0] group 16 finished
[Thread 0] Pass 1: Memory used: 816k/0k (500k/317k), time: 0.01/ 0.01/ 0.00
[Thread 0] Pass 1: I/O read: 3MB, write: 0MB, rate: 324.99MB/s
[Thread 0] Scanned group range [0, 16), inodes 733
Pass 2: Checking directory structure
Pass 2: Memory used: 816k/0k (80k/737k), time: 0.01/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 197.59MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 816k/0k (80k/737k), time: 0.05/ 0.04/ 0.00
Pass 3: Memory used: 816k/0k (78k/739k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 816k/0k (73k/744k), time: 0.02/ 0.02/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 816k/0k (72k/745k), time: 0.01/ 0.01/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 176.87MB/s
376 inodes used (0.04%, out of 1024000)
7 non-contiguous files (1.9%)
1 non-contiguous directory (0.3%)
# of inodes with ind/dind/tind blocks: 1/0/0
284274 blocks used (44.42%, out of 640000)
0 bad blocks
1 large file
163 regular files
196 directories
0 character device files
0 block device files
0 fifos
0 links
7 symbolic links (3 fast symbolic links)
0 sockets
------------
366 files
Memory used: 816k/0k (70k/747k), time: 0.08/ 0.07/ 0.00
I/O read: 2MB, write: 0MB, rate: 23.84MB/s
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg329-server: oleg329-server.virtnet: executing set_default_debug all all
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
Started lustre-MDT0000
Stopping /mnt/lustre-mds2 (opts:) on oleg329-server
e2fsck -d -v -t -t -f -n /dev/mapper/mds2_flakey -m8
oleg329-server: e2fsck 1.46.2.wc5 (26-Mar-2022)
oleg329-server: Use max possible thread num: 2 instead
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 16)
[Thread 1] Scan group range [16, 32)
[Thread 1] jumping to group 16
[Thread 0] jumping to group 0
[Thread 1] group 17 finished
[Thread 1] group 18 finished
[Thread 1] group 19 finished
[Thread 1] group 20 finished
[Thread 1] group 21 finished
[Thread 1] group 22 finished
[Thread 1] group 23 finished
[Thread 1] group 24 finished
[Thread 1] group 25 finished
[Thread 1] group 26 finished
[Thread 1] group 27 finished
[Thread 1] group 28 finished
[Thread 1] group 29 finished
[Thread 1] group 30 finished
[Thread 1] group 31 finished
[Thread 1] group 32 finished
[Thread 1] Pass 1: Memory used: 672k/0k (430k/243k), time: 0.00/ 0.00/ 0.00
[Thread 1] Pass 1: I/O read: 1MB, write: 0MB, rate: 381.83MB/s
[Thread 1] Scanned group range [16, 32), inodes 649
[Thread 0] e2fsck_pass1_run:2564: increase inode 78 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 79 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 80 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084
[Thread 0] group 1 finished
[Thread 0] e2fsck_pass1_run:2564: increase inode 32001 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 32002 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 32003 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 32004 badness 0 to 2 for 10084
[Thread 0] group 2 finished
[Thread 0] group 3 finished
[Thread 0] group 4 finished
[Thread 0] group 5 finished
[Thread 0] group 6 finished
[Thread 0] group 7 finished
[Thread 0] group 8 finished
[Thread 0] group 9 finished
[Thread 0] group 10 finished
[Thread 0] group 11 finished
[Thread 0] group 12 finished
[Thread 0] group 13 finished
[Thread 0] group 14 finished
[Thread 0] group 15 finished
[Thread 0] group 16 finished
[Thread 0] Pass 1: Memory used: 744k/0k (422k/323k), time: 0.01/ 0.01/ 0.00
[Thread 0] Pass 1: I/O read: 2MB, write: 0MB, rate: 191.50MB/s
[Thread 0] Scanned group range [0, 16), inodes 271
Pass 2: Checking directory structure
Pass 2: Memory used: 744k/0k (82k/663k), time: 0.01/ 0.00/ 0.00
Pass 2: I/O read: 2MB, write: 0MB, rate: 248.63MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 744k/0k (82k/663k), time: 0.06/ 0.05/ 0.00
Pass 3: Memory used: 744k/0k (80k/665k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 744k/0k (73k/672k), time: 0.03/ 0.03/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 744k/0k (72k/673k), time: 0.01/ 0.01/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 105.30MB/s
479 inodes used (0.05%, out of 1024000)
4 non-contiguous files (0.8%)
1 non-contiguous directory (0.2%)
# of inodes with ind/dind/tind blocks: 1/0/0
284266 blocks used (44.42%, out of 640000)
0 bad blocks
1 large file
245 regular files
213 directories
0 character device files
0 block device files
0 fifos
0 links
11 symbolic links (6 fast symbolic links)
0 sockets
------------
469 files
Memory used: 744k/0k (71k/674k), time: 0.10/ 0.08/ 0.00
I/O read: 2MB, write: 0MB, rate: 20.58MB/s
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg329-server: oleg329-server.virtnet: executing set_default_debug all all
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
Started lustre-MDT0001
Stopping /mnt/lustre-mds1 (opts:) on oleg329-server
e2fsck -d -v -t -t -f -n /dev/mapper/mds1_flakey -m8
oleg329-server: e2fsck 1.46.2.wc5 (26-Mar-2022)
oleg329-server: Use max possible thread num: 2 instead
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 16)
[Thread 1] Scan group range [16, 32)
[Thread 0] jumping to group 0
[Thread 1] jumping to group 16
[Thread 1] e2fsck_pass1_run:2564: increase inode 512039 badness 0 to 2 for 10084
[Thread 1] group 17 finished
[Thread 1] group 18 finished
[Thread 1] group 19 finished
[Thread 1] group 20 finished
[Thread 1] group 21 finished
[Thread 1] group 22 finished
[Thread 1] group 23 finished
[Thread 1] group 24 finished
[Thread 1] group 25 finished
[Thread 1] group 26 finished
[Thread 1] group 27 finished
[Thread 1] group 28 finished
[Thread 1] group 29 finished
[Thread 1] group 30 finished
[Thread 1] group 31 finished
[Thread 1] group 32 finished
[Thread 1] Pass 1: Memory used: 740k/0k (505k/236k), time: 0.00/ 0.00/ 0.00
[Thread 1] Pass 1: I/O read: 1MB, write: 0MB, rate: 558.97MB/s
[Thread 1] Scanned group range [16, 32), inodes 238
[Thread 0] e2fsck_pass1_run:2564: increase inode 78 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 79 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 80 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 159 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 160 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 161 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 164 badness 0 to 2 for 10084
[Thread 0] group 1 finished
[Thread 0] e2fsck_pass1_run:2564: increase inode 32001 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 32002 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 32003 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 32004 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 32005 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 32007 badness 0 to 2 for 10084
[Thread 0] group 2 finished
[Thread 0] e2fsck_pass1_run:2564: increase inode 64001 badness 0 to 2 for 10084
[Thread 0]
e2fsck_pass1_run:2564: increase inode 64002 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 64003 badness 0 to 2 for 10084 [Thread 0] e2fsck_pass1_run:2564: increase inode 64004 badness 0 to 2 for 10084 [Thread 0] group 3 finished [Thread 0] group 4 finished [Thread 0] group 5 finished [Thread 0] group 6 finished [Thread 0] group 7 finished [Thread 0] group 8 finished [Thread 0] group 9 finished [Thread 0] group 10 finished [Thread 0] group 11 finished [Thread 0] group 12 finished [Thread 0] group 13 finished [Thread 0] group 14 finished [Thread 0] group 15 finished [Thread 0] group 16 finished [Thread 0] Pass 1: Memory used: 816k/0k (500k/317k), time: 0.01/ 0.01/ 0.00 [Thread 0] Pass 1: I/O read: 3MB, write: 0MB, rate: 287.88MB/s [Thread 0] Scanned group range [0, 16), inodes 733 Pass 2: Checking directory structure Pass 2: Memory used: 816k/0k (80k/737k), time: 0.01/ 0.00/ 0.00 Pass 2: I/O read: 1MB, write: 0MB, rate: 153.19MB/s Pass 3: Checking directory connectivity Peak memory: Memory used: 816k/0k (80k/737k), time: 0.06/ 0.04/ 0.01 Pass 3: Memory used: 816k/0k (77k/740k), time: 0.00/ 0.00/ 0.00 Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 4: Checking reference counts Pass 4: Memory used: 816k/0k (73k/744k), time: 0.03/ 0.02/ 0.00 Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s Pass 5: Checking group summary information Pass 5: Memory used: 816k/0k (72k/745k), time: 0.01/ 0.01/ 0.00 Pass 5: I/O read: 1MB, write: 0MB, rate: 147.71MB/s 371 inodes used (0.04%, out of 1024000) 6 non-contiguous files (1.6%) 1 non-contiguous directory (0.3%) # of inodes with ind/dind/tind blocks: 1/0/0 284267 blocks used (44.42%, out of 640000) 0 bad blocks 1 large file 163 regular files 191 directories 0 character device files 0 block device files 0 fifos 0 links 7 symbolic links (3 fast symbolic links) 0 sockets ------------ 361 files Memory used: 816k/0k (70k/747k), time: 0.09/ 0.07/ 0.01 I/O read: 2MB, write: 0MB, rate: 21.99MB/s Starting 
mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg329-server: oleg329-server.virtnet: executing set_default_debug all all
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
Started lustre-MDT0000
Stopping /mnt/lustre-mds2 (opts:) on oleg329-server
e2fsck -d -v -t -t -f -n /dev/mapper/mds2_flakey -m8
oleg329-server: e2fsck 1.46.2.wc5 (26-Mar-2022)
oleg329-server: Use max possible thread num: 2 instead
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 16)
[Thread 1] Scan group range [16, 32)
[Thread 0] jumping to group 0
[Thread 1] jumping to group 16
[Thread 1] groups 17-32 finished
[Thread 1] Pass 1: Memory used: 672k/0k (428k/245k), time: 0.00/ 0.00/ 0.00
[Thread 1] Pass 1: I/O read: 1MB, write: 0MB, rate: 465.12MB/s
[Thread 1] Scanned group range [16, 32), inodes 649
[Thread 0] e2fsck_pass1_run:2564: increase inode 78 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: (same message repeated for inodes 79-151)
[Thread 0] group 1 finished
[Thread 0] e2fsck_pass1_run:2564: increase inode 32001 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: (same message repeated for inodes 32002-32004)
[Thread 0] groups 2-16 finished
[Thread 0] Pass 1: Memory used: 740k/0k (419k/322k), time: 0.01/ 0.01/ 0.00
[Thread 0] Pass 1: I/O read: 2MB, write: 0MB, rate: 241.20MB/s
[Thread 0] Scanned group range [0, 16), inodes 271
Pass 2: Checking directory structure
Pass 2: Memory used: 740k/0k (81k/660k), time: 0.01/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 192.94MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 740k/0k (81k/660k), time: 0.05/ 0.04/ 0.01
Pass 3: Memory used: 740k/0k (78k/663k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 740k/0k (73k/668k), time: 0.02/ 0.02/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Pass 5: Memory used: 740k/0k (72k/669k), time: 0.01/ 0.01/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 118.75MB/s
361 inodes used (0.04%, out of 1024000)
3 non-contiguous files (0.8%)
1 non-contiguous directory (0.3%)
# of inodes with ind/dind/tind blocks: 1/0/0
284249 blocks used (44.41%, out of 640000)
0 bad blocks
1 large file
147 regular files
193 directories
0 character device files
0 block device files
0 fifos
0 links
11 symbolic links (6 fast symbolic links)
0 sockets
------------
351 files
Memory used: 740k/0k (71k/670k), time: 0.09/ 0.07/ 0.01
I/O read: 2MB, write: 0MB, rate: 22.85MB/s
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg329-server: oleg329-server.virtnet: executing set_default_debug all all
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
Started lustre-MDT0001
PASS 17n (74s)
debug_raw_pointers=0 debug_raw_pointers=0
debug_raw_pointers=Y debug_raw_pointers=Y
== sanity test 17o: stat file with incompat LMA feature == 20:20:51 (1713486051)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d17o.sanityo
Failing mds2 on oleg329-server
Stopping /mnt/lustre-mds2 (opts:) on oleg329-server
20:20:59 (1713486059) shut down
Failover mds2 to oleg329-server
mount facets: mds2
Starting mds2: -o localrecov /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg329-server: oleg329-server.virtnet: executing set_default_debug all all
pdsh@oleg329-client: oleg329-server: ssh
exited with exit code 1 Started lustre-MDT0001 20:21:13 (1713486073) targets are mounted 20:21:13 (1713486073) facet_failover done oleg329-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0001-mdc-*.mds_server_uuid mdc.lustre-MDT0001-mdc-*.mds_server_uuid in FULL state after 0 sec fail_loc=0x194 ls: cannot access /mnt/lustre/d17o.sanityo/f17o.sanity: Operation not supported fail_loc=0 PASS 17o (32s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 18: touch .../f ; ls ... ======================================================================================== 20:21:26 (1713486086) d10.sanity d11.sanity d12.sanity d13.sanity d14.sanity d15.sanity d16.sanity d17a.sanity d17b.sanity d17c.sanity d17d.sanity d17e.sanity d17f.sanity d17g.sanity d17h.sanity d17i.sanity d17k.sanity d17k.sanity.new d17l.sanity d17m.sanity d17n.sanity d17o.sanityo d5.sanity d6g.sanity d6g.sanity.local d7a.sanity d7b.sanity d8.sanity d9.sanity f0d.sanity.export f0d.sanity.import f18.sanity f6a.sanity f6c.sanity f6e.sanity f6h.sanity PASS 18 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 19a: touch .../f19 ; ls -l ... 
; rm .../f19 ===================================================================== 20:21:29 (1713486089) total 292 drwxr-xr-x 3 root root 8192 Apr 18 20:17 d10.sanity drwxr-xr-x 3 root root 8192 Apr 18 20:18 d11.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:18 d12.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:18 d13.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:18 d14.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:18 d15.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:18 d16.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:18 d17a.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:18 d17b.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:18 d17c.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:18 d17d.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:18 d17e.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:18 d17f.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:18 d17g.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:18 d17h.sanity drwxr-xr-x 2 root root 4096 Apr 18 20:18 d17i.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:18 d17k.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:18 d17k.sanity.new drwxr-xr-x 2 root root 8192 Apr 18 20:18 d17l.sanity drwxr-xr-x 2 root root 69632 Apr 18 20:19 d17m.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:20 d17n.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:20 d17o.sanityo drwxr-xr-x 3 root root 8192 Apr 18 20:17 d5.sanity drwxrwxrwx 3 root root 8192 Apr 18 20:17 d6g.sanity drwxr-sr-x 3 root sanityusr 4096 Apr 18 20:17 d6g.sanity.local drwxr-xr-x 2 root root 8192 Apr 18 20:17 d7a.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:17 d7b.sanity drwxr-xr-x 2 root root 8192 Apr 18 20:17 d8.sanity drwxr-xr-x 3 root root 8192 Apr 18 20:17 d9.sanity -rw-r--r-- 1 root root 381 Apr 18 20:16 f0d.sanity.export -rw-r--r-- 1 root root 716 Apr 18 20:16 f0d.sanity.import -rw-r--r-- 1 root root 0 Apr 18 20:21 f18.sanity -rw-r--r-- 1 root root 0 Apr 18 20:21 f19a.sanity -rw-rw-rw- 1 root root 0 Apr 18 20:17 f6a.sanity -rw-r--r-- 1 sanityusr root 0 Apr 18 20:17 f6c.sanity -rw-r--r-- 1 root sanityusr 0 
Apr 18 20:17 f6e.sanity -rw-r--r-- 1 sanityusr sanityusr 0 Apr 18 20:17 f6h.sanity /mnt/lustre/f19a.sanity: absent OK PASS 19a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 19b: ls -l .../f19 (should return error) ======================================================================== 20:21:33 (1713486093) ls: cannot access /mnt/lustre/f19b.sanity: No such file or directory PASS 19b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 19c: runas -u 500 -g 500 touch .../f19 (should return error) ============================================================ 20:21:37 (1713486097) running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/f19c.sanity] touch: cannot touch '/mnt/lustre/f19c.sanity': Permission denied PASS 19c (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 19d: cat .../f19 (should return error) ======================================================================== 20:21:40 (1713486100) cat: /mnt/lustre/f19: No such file or directory PASS 19d (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 20: touch .../f ; ls -l ... 
=============== 20:21:43 (1713486103) /mnt/lustre/f20.sanity: absent OK PASS 20 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 21: write to dangling link ================ 20:21:46 (1713486106) striped dir -i1 -c2 -H crush /mnt/lustre/d21.sanity foo /mnt/lustre/d21.sanity/link has type link OK /mnt/lustre/d21.sanity/link has type file OK PASS 21 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 22: unpack tar archive as non-root user === 20:21:50 (1713486110) striped dir -i0 -c2 -H crush2 /mnt/lustre/d22.sanity running as uid/gid/euid/egid 500/500/500/500, groups: [tar] [cf] [-] [/etc/hosts] [/etc/sysconfig/network] running as uid/gid/euid/egid 500/500/500/500, groups: [tar] [xf] [-] tar: Removing leading `/' from member names /mnt/lustre/d22.sanity/etc: total 5 -rw-r--r-- 1 sanityusr sanityusr 159 Feb 8 2017 hosts drwxr-xr-x 2 sanityusr sanityusr 4096 Apr 18 20:21 sysconfig /mnt/lustre/d22.sanity/etc/sysconfig: total 1 -rw-r--r-- 1 sanityusr sanityusr 22 Jan 16 2022 network /mnt/lustre/d22.sanity/etc has type dir OK /mnt/lustre/d22.sanity/etc is owned by user #500 OK /mnt/lustre/d22.sanity/etc is owned by group #500 OK PASS 22 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 23a: O_CREAT|O_EXCL in subdir ============= 20:21:53 (1713486113) striped dir -i1 -c2 -H crush2 /mnt/lustre/d23a.sanity Succeed in opening file "/mnt/lustre/d23a.sanity/f23a.sanity"(flags=O_CREAT) Error in opening file "/mnt/lustre/d23a.sanity/f23a.sanity"(flags=O_CREAT) 17: File exists PASS 23a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 23b: O_APPEND check ======================= 20:21:58 (1713486118) striped dir -i1 -c2 -H crush2 /mnt/lustre/d23b.sanity /mnt/lustre/d23b.sanity/f23b.sanity has size 8 OK PASS 23b (2s) debug_raw_pointers=0 debug_raw_pointers=0 
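Tests 23b and 23c above check O_APPEND size accounting: every write is repositioned to end-of-file, so the final size equals the sum of the appended write sizes (e.g. 6400 + 6400 = 12800 in the 23c output). A minimal local sketch of the same invariant using plain POSIX flags (illustrative only, not the Lustre test script):

```python
import os
import tempfile

# With O_APPEND each write lands at EOF, so the final size is the sum of
# all appended writes -- the invariant tests 23b/23c verify over Lustre.
tmp_fd, path = tempfile.mkstemp()
os.close(tmp_fd)

fd = os.open(path, os.O_WRONLY | os.O_APPEND)
os.write(fd, b"x" * 6400)   # first append
os.write(fd, b"y" * 6400)   # second append, positioned at EOF
size = os.fstat(fd).st_size
os.close(fd)
os.unlink(path)

print(size)  # 12800
```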
debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 23c: O_APPEND size checks for tiny writes ========================================================== 20:22:02 (1713486122) 800+0 records in 800+0 records out 6400 bytes (6.4 kB) copied, 0.438017 s, 14.6 kB/s /mnt/lustre/f23c.sanity has size 6400 OK 800+0 records in 800+0 records out 6400 bytes (6.4 kB) copied, 1.02437 s, 6.2 kB/s 800+0 records in 800+0 records out 6400 bytes (6.4 kB) copied, 1.0321 s, 6.2 kB/s /mnt/lustre/f23c.sanity has size 12800 OK 4+0 records in 4+0 records out 16384 bytes (16 kB) copied, 0.00939698 s, 1.7 MB/s 100+0 records in 100+0 records out 800 bytes (800 B) copied, 0.0642789 s, 12.4 kB/s /mnt/lustre/f23c.sanity has size 17184 OK 11+0 records in 11+0 records out 45089 bytes (45 kB) copied, 0.0177084 s, 2.5 MB/s 173+0 records in 173+0 records out 2941 bytes (2.9 kB) copied, 0.109409 s, 26.9 kB/s /mnt/lustre/f23c.sanity has size 48030 OK PASS 23c (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 23d: file offset is correct after appending writes ========================================================== 20:22:07 (1713486127) PASS 23d (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 24a: rename file to non-existent target === 20:22:10 (1713486130) -- same directory rename striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d24a.sanity /mnt/lustre/d24a.sanity/f24a.sanity.2 has type file OK PASS 24a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 24b: rename file to existing target ======= 20:22:14 (1713486134) striped dir -i0 -c2 -H crush /mnt/lustre/d24b.sanity /mnt/lustre/d24b.sanity/f24b.sanity.1: absent OK /mnt/lustre/d24b.sanity/f24b.sanity.2 has type file OK PASS 24b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 24c: rename directory to non-existent target 
========================================================== 20:22:17 (1713486137) striped dir -i0 -c2 -H crush2 /mnt/lustre/d24c.sanity striped dir -i0 -c2 -H all_char /mnt/lustre/d24c.sanity/d24c.1 /mnt/lustre/d24c.sanity/d24c.1: absent OK /mnt/lustre/d24c.sanity/d24c.2 has type dir OK PASS 24c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 24d: rename directory to existing target == 20:22:21 (1713486141) striped dir -i0 -c1 -H crush2 /mnt/lustre/d24d.sanity striped dir -i0 -c1 -H crush2 /mnt/lustre/d24d.sanity/d24d.1 striped dir -i0 -c1 -H all_char /mnt/lustre/d24d.sanity/d24d.2 /mnt/lustre/d24d.sanity/d24d.1: absent OK /mnt/lustre/d24d.sanity/d24d.2 has type dir OK PASS 24d (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 24e: touch .../R5a/f; rename .../R5a/f .../R5b/g ================================================================ 20:22:24 (1713486144) -- cross directory renames -- striped dir -i0 -c2 -H all_char /mnt/lustre/R5a striped dir -i0 -c2 -H crush2 /mnt/lustre/R5b /mnt/lustre/R5a/f: absent OK /mnt/lustre/R5b/g has type file OK PASS 24e (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 24f: touch .../R6a/f R6b/g; mv .../R6a/f .../R6b/g ============================================================== 20:22:27 (1713486147) striped dir -i0 -c2 -H all_char /mnt/lustre/R6a striped dir -i0 -c2 -H crush2 /mnt/lustre/R6b /mnt/lustre/R6a/f: absent OK /mnt/lustre/R6b/g has type file OK PASS 24f (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 24g: mkdir .../R7{a,b}/d; mv .../R7a/d .../R7b/e ================================================================ 20:22:30 (1713486150) striped dir -i0 -c2 -H crush2 /mnt/lustre/R7a striped dir -i0 -c2 -H all_char /mnt/lustre/R7b striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/R7a/d /mnt/lustre/R7a/d: absent OK 
/mnt/lustre/R7b/e has type dir OK PASS 24g (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 24h: mkdir .../R8{a,b}/{d,e}; rename .../R8a/d .../R8b/e ========================================================== 20:22:34 (1713486154) striped dir -i0 -c1 -H fnv_1a_64 /mnt/lustre/R8a striped dir -i0 -c1 -H all_char /mnt/lustre/R8b striped dir -i0 -c1 -H all_char /mnt/lustre/R8a/d striped dir -i0 -c1 -H fnv_1a_64 /mnt/lustre/R8b/e /mnt/lustre/R8a/d: absent OK /mnt/lustre/R8b/e has type dir OK PASS 24h (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 24i: rename file to dir error: touch f ; mkdir a ; rename f a ========================================================== 20:22:38 (1713486158) -- rename error cases striped dir -i0 -c2 -H crush2 /mnt/lustre/R9 striped dir -i0 -c2 -H crush /mnt/lustre/R9/a rename '/mnt/lustre/R9/f' returned -1: Is a directory /mnt/lustre/R9/f has type file OK /mnt/lustre/R9/a has type dir OK /mnt/lustre/R9/a/f: absent OK PASS 24i (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 24j: source does not exist ====================================================================================== 20:22:41 (1713486161) striped dir -i0 -c2 -H all_char /mnt/lustre/R10 rename '/mnt/lustre/R10/f' returned -1: No such file or directory /mnt/lustre/R10 has type dir OK /mnt/lustre/R10/f: absent OK /mnt/lustre/R10/g: absent OK PASS 24j (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 24k: touch .../R11a/f; mv .../R11a/f .../R11a/d ================================================================= 20:22:44 (1713486164) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/R11a striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/R11a/d /mnt/lustre/R11a/f: absent OK /mnt/lustre/R11a/d/f has type file OK PASS 24k (2s) debug_raw_pointers=0 debug_raw_pointers=0 
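The error cases in tests 24i and 24j above are standard POSIX rename semantics: renaming a regular file onto an existing directory fails with EISDIR ("Is a directory"), and a missing source fails with ENOENT. A local sketch of those two checks (plain Python against a temp directory, not the Lustre test script; the later 24r-24t cases exercise EINVAL and ENOTEMPTY the same way):

```python
import errno
import os
import tempfile

d = tempfile.mkdtemp()
open(os.path.join(d, "f"), "w").close()   # regular file (like R9/f)
os.mkdir(os.path.join(d, "a"))            # directory target (like R9/a)

# Renaming a file onto an existing directory -> EISDIR (test 24i).
try:
    os.rename(os.path.join(d, "f"), os.path.join(d, "a"))
    err_target_is_dir = 0
except OSError as e:
    err_target_is_dir = e.errno

# Renaming a non-existent source -> ENOENT (test 24j).
try:
    os.rename(os.path.join(d, "missing"), os.path.join(d, "g"))
    err_no_source = 0
except OSError as e:
    err_no_source = e.errno

print(err_target_is_dir == errno.EISDIR, err_no_source == errno.ENOENT)
```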
debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 24l: Renaming a file to itself ================================================================================== 20:22:48 (1713486168) PASS 24l (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 24m: Renaming a file to a hard link to itself =================================================================== 20:22:52 (1713486172) /mnt/lustre/f24m has type file OK /mnt/lustre/f24m2 has type file OK PASS 24m (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 24n: Statting the old file after renaming (Posix rename 2) ========================================================== 20:22:57 (1713486177) /mnt/lustre/f24n: absent OK PASS 24n (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 24o: rename of files during htree split === 20:23:01 (1713486181) striped dir -i0 -c2 -H crush /mnt/lustre/d24o.sanity using random seed 1804289383 32s 1 iterations 0/0/0 errors 65s 2 iterations 0/0/0 errors 90s 3 iterations 0/0/0 errors 115s 4 iterations 0/0/0 errors 140s 5 iterations 0/0/0 errors 165s 6 iterations 0/0/0 errors 189s 7 iterations 0/0/0 errors 212s 8 iterations 0/0/0 errors 234s 9 iterations 0/0/0 errors 259s 10 iterations 0/0/0 errors PASS 24o (261s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 24p: mkdir .../R12{a,b}; rename .../R12a .../R12b ========================================================== 20:27:24 (1713486444) striped dir -i0 -c2 -H all_char /mnt/lustre/R12a striped dir -i0 -c2 -H crush /mnt/lustre/R12b /mnt/lustre/R12a: absent OK /mnt/lustre/R12b has type dir OK PASS 24p (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 24q: mkdir .../R13{a,b}; open R13b rename R13a R13b ============================================================= 20:27:28 
(1713486448)
striped dir -i0 -c2 -H crush2 /mnt/lustre/R13a
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/R13b
multiop /mnt/lustre/R13b vD_c
TMPPIPE=/tmp/multiop_open_wait_pipe.7531
/mnt/lustre/R13a: absent OK
/mnt/lustre/R13b has type dir OK
PASS 24q (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 24r: mkdir .../R14a/b; rename .../R14a .../R14a/b =============================================================== 20:27:32 (1713486452)
striped dir -i0 -c2 -H crush2 /mnt/lustre/R14a
striped dir -i0 -c2 -H crush2 /mnt/lustre/R14a/b
rename '/mnt/lustre/R14a' returned -1: Invalid argument
/mnt/lustre/R14a has type dir OK
/mnt/lustre/R14a/b has type dir OK
PASS 24r (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 24s: mkdir .../R15a/b/c; rename .../R15a .../R15a/b/c =========================================================== 20:27:35 (1713486455)
striped dir -i0 -c2 -H crush2 /mnt/lustre/R15a
striped dir -i0 -c2 -H crush /mnt/lustre/R15a/b
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/R15a/b/c
rename '/mnt/lustre/R15a' returned -1: Invalid argument
/mnt/lustre/R15a has type dir OK
/mnt/lustre/R15a/b/c has type dir OK
PASS 24s (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 24t: mkdir .../R16a/b/c; rename .../R16a/b/c .../R16a =========================================================== 20:27:39 (1713486459)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/R16a
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/R16a/b
striped dir -i0 -c2 -H all_char /mnt/lustre/R16a/b/c
rename '/mnt/lustre/R16a/b/c' returned -1: Directory not empty
/mnt/lustre/R16a has type dir OK
/mnt/lustre/R16a/b/c has type dir OK
PASS 24t (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 24u: create stripe file =================== 20:27:42 (1713486462)
/mnt/lustre/f24u.sanity has size 2097152 OK
PASS 24u (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 24v: list large directory (test hash collision, b=17560) ========================================================== 20:27:46 (1713486466)
striped dir -i0 -c2 -H all_char /mnt/lustre/d24v.sanity
- create 8444 (time 1713486477.15 total 10.00 last 844.38)
- create 16807 (time 1713486487.15 total 20.00 last 836.27)
- create 20000 (time 1713486491.42 total 24.27 last 747.76)
- create 27766 (time 1713486501.42 total 34.27 last 776.57)
- create 30000 (time 1713486504.10 total 36.94 last 836.11)
- create 38084 (time 1713486514.10 total 46.94 last 808.39)
- create 40000 (time 1713486516.12 total 48.96 last 948.51)
- create 49871 (time 1713486526.12 total 58.96 last 987.03)
- create 60000 (time 1713486535.89 total 68.73 last 1036.89)
- create 67358 (time 1713486545.89 total 78.73 last 735.70)
- create 70000 (time 1713486550.17 total 83.02 last 616.93)
- create 76614 (time 1713486560.17 total 93.02 last 661.37)
- create 80000 (time 1713486564.09 total 96.94 last 863.83)
- create 88501 (time 1713486574.09 total 106.94 last 850.08)
- create 97520 (time 1713486584.09 total 116.94 last 901.86)
total: 100000 create in 119.29 seconds: 838.28 ops/second
mdc.lustre-MDT0000-mdc-ffff8800b6384000.stats=clear
mdc.lustre-MDT0001-mdc-ffff8800b6384000.stats=clear
readpages: 6 rpc_max: 7-2/+1
- unlinked 0 (time 1713486596 ; total 0 ; last 0)
- unlinked 10000 (time 1713486617 ; total 21 ; last 21)
- unlinked 20000 (time 1713486636 ; total 40 ; last 19)
- unlinked 30000 (time 1713486654 ; total 58 ; last 18)
- unlinked 40000 (time 1713486672 ; total 76 ; last 18)
- unlinked 50000 (time 1713486688 ; total 92 ; last 16)
- unlinked 60000 (time 1713486707 ; total 111 ; last 19)
- unlinked 70000 (time 1713486725 ; total 129 ; last 18)
- unlinked 80000 (time 1713486742 ; total 146 ; last 17)
- unlinked 90000 (time 1713486758 ; total 162 ; last 16)
total: 100000 unlinks in 178 seconds: 561.797729 unlinks/second
Waiting for MDT destroys to complete
cleanup time 180
PASS 24v (309s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 24w: Reading a file larger than 4Gb ======= 20:32:56 (1713486776)
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0305808 s, 34.3 MB/s
1+0 records in
1+0 records out
234852 bytes (235 kB) copied, 0.0114211 s, 20.6 MB/s
0+1 records in
0+1 records out
234852 bytes (235 kB) copied, 0.0126872 s, 18.5 MB/s
PASS 24w (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 24x: cross MDT rename/link ================ 20:32:59 (1713486779)
striped dir -i0 -c2 -H crush /mnt/lustre/d24x.sanity
striped dir -i0 -c2 -H crush2 /mnt/lustre/d24x.sanity/src_dir
striped dir -i0 -c2 -H all_char /mnt/lustre/d24x.sanity/remote_dir/tgt_dir
PASS 24x (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 24y: rename/link on the same dir should succeed ========================================================== 20:33:03 (1713486783)
striped dir -i0 -c2 -H crush /mnt/lustre/d24y.sanity
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d24y.sanity/remote_dir/src_dir
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d24y.sanity/remote_dir/tgt_dir
PASS 24y (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 24z: cross-MDT rename is done as cp ======= 20:33:06 (1713486786)
mdt.lustre-MDT0000.enable_remote_rename=0
mdt.lustre-MDT0001.enable_remote_rename=0
mdt.lustre-MDT0000.enable_remote_rename=1
mdt.lustre-MDT0001.enable_remote_rename=1
PASS 24z (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 24A: readdir() returns correct number of entries. ========================================================== 20:33:10 (1713486790)
striped dir -i0 -c2 -H all_char /mnt/lustre/d24A.sanity
total: 5000 create in 4.65 seconds: 1076.23 ops/second
- unlinked 0 (time 1713486802 ; total 0 ; last 0)
total: 5000 unlinks in 5 seconds: 1000.000000 unlinks/second
Waiting for MDT destroys to complete
cleanup time 7
PASS 24A (18s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 24B: readdir for striped dir return correct number of entries ========================================================== 20:33:29 (1713486809)
striped dir -i0 -c2 -H crush /mnt/lustre/d24B.sanity
PASS 24B (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 24C: check .. in striped dir ============== 20:33:33 (1713486813)
PASS 24C (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 24E: cross MDT rename/link ================ 20:33:36 (1713486816)
SKIP: sanity test_24E needs >= 4 MDTs
SKIP 24E (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 24F: hash order vs readdir (LU-11330) ===== 20:33:38 (1713486818)
100 repeats
PASS 24F (14s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 24G: migrate symlink in rename ============ 20:33:54 (1713486834)
PASS 24G (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 24H: repeat FLD_QUERY rpc ================= 20:33:57 (1713486837)
striped dir -i1 -c1 -H crush /mnt/lustre/d24H.sanity
fail_loc=0x80001103
PASS 24H (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 25a: create file in symlinked directory ========================================================================= 20:34:00 (1713486840)
== symlink sanity =============================================
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d25
PASS 25a (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 25b: lookup file in symlinked directory ========================================================================= 20:34:03 (1713486843)
/mnt/lustre/s25/foo has type file OK
PASS 25b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 26a: multiple component symlink ================================================================================= 20:34:06 (1713486846)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d26
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d26/d26-2
PASS 26a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 26b: multiple component symlink at end of lookup ================================================================ 20:34:09 (1713486849)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d26b.sanity/d26-2
PASS 26b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 26c: chain of symlinks ==================== 20:34:13 (1713486853)
striped dir -i0 -c2 -H crush2 /mnt/lustre/d26.2
PASS 26c (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 26d: create multiple component recursive symlink ========================================================== 20:34:16 (1713486856)
PASS 26d (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 26e: unlink multiple component recursive symlink ========================================================== 20:34:19 (1713486859)
PASS 26e (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 26f: rm -r of a directory which has recursive symlink ========================================================== 20:34:22 (1713486862)
striped dir -i0 -c2 -H all_char /mnt/lustre/d26f.sanity
striped dir -i0 -c2 -H crush2 /mnt/lustre/d26f.sanity/f26f.sanity
striped dir -i0 -c2 -H all_char lndir/bar1
striped dir -i0 -c2 -H all_char /mnt/lustre/d26f.sanity/f26f.sanity/f26f.sanity
/mnt/lustre/f26f.sanity: absent OK
PASS 26f (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27a: one stripe file ====================== 20:34:26 (1713486866)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d27a.sanity
/mnt/lustre/d27a.sanity
stripe_count: 1 stripe_size: 4194304 pattern: 0 stripe_offset: -1
/mnt/lustre/d27a.sanity/f27a.sanity has type file OK
PASS 27a (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27b: create and write to two stripe file == 20:34:29 (1713486869)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d27b.sanity
2
4+0 records in
4+0 records out
16384 bytes (16 kB) copied, 0.00531707 s, 3.1 MB/s
PASS 27b (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27ca: one stripe on specified OST ========= 20:34:32 (1713486872)
striped dir -i1 -c2 -H crush2 /mnt/lustre/d27ca.sanity
1
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.129838 s, 32.3 MB/s
PASS 27ca (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27cb: two stripes on specified OSTs ======= 20:34:35 (1713486875)
striped dir -i1 -c2 -H all_char /mnt/lustre/d27cb.sanity
/mnt/lustre/d27cb.sanity/f27cb.sanity
lmm_stripe_count:  2
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 1
	obdidx	objid	objid	group
	     1	  237	 0xed	0x2c0000400
	     0	  236	 0xec	0x280000400
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.130046 s, 32.3 MB/s
PASS 27cb (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27cc: two stripes on the same OST ========= 20:34:38 (1713486878)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d27cc.sanity
/mnt/lustre/d27cc.sanity/f27cc.sanity
lmm_stripe_count:  2
lmm_stripe_size:   4194304
lmm_pattern:       raid0,overstriped
lmm_layout_gen:    0
lmm_stripe_offset: 0
	obdidx	objid	objid	group
	     0	  237	 0xed	0x280000400
	     0	  238	 0xee	0x280000400
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.115821 s, 36.2 MB/s
PASS 27cc (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27cd: four stripes on two OSTs ============ 20:34:42 (1713486882)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d27cd.sanity
/mnt/lustre/d27cd.sanity/f27cd.sanity
lmm_stripe_count:  4
lmm_stripe_size:   4194304
lmm_pattern:       raid0,overstriped
lmm_layout_gen:    0
lmm_stripe_offset: 0
	obdidx	objid	objid	group
	     0	  258	0x102	0x280000401
	     1	  258	0x102	0x2c0000401
	     1	  259	0x103	0x2c0000401
	     0	  259	0x103	0x280000401
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.107564 s, 39.0 MB/s
PASS 27cd (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27ce: more stripes than OSTs with -o ====== 20:34:45 (1713486885)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d27ce.sanity
/mnt/lustre/d27ce.sanity/f27ce.sanity
lmm_stripe_count:  3
lmm_stripe_size:   4194304
lmm_pattern:       raid0,overstriped
lmm_layout_gen:    0
lmm_stripe_offset: 0
	obdidx	objid	objid	group
	     0	  239	 0xef	0x280000400
	     0	  240	 0xf0	0x280000400
	     0	  241	 0xf1	0x280000400
4+0 records in
4+0 records out
4194304 bytes (4.2 MB) copied, 0.125027 s, 33.5 MB/s
PASS 27ce (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27cf: 'setstripe -o' on inactive OSTs should return error ========================================================== 20:34:49 (1713486889)
striped dir -i1 -c2 -H all_char /mnt/lustre/d27cf.sanity
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
lfs setstripe: setstripe error for '/mnt/lustre/d27cf.sanity/f27cf.sanity': inactive OST among your specified 1 OST(s)
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
PASS 27cf (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27cg: 1000 shouldn't cause too many credits ========================================================== 20:34:54 (1713486894)
lmm_stripe_count:  1000
lmm_stripe_size:   4194304
lmm_pattern:       raid0,overstriped
lmm_stripe_offset: 0
PASS 27cg (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27d: create file with default settings ==== 20:34:59 (1713486899)
striped dir -i1 -c2 -H crush /mnt/lustre/d27d.sanity
/mnt/lustre/d27d.sanity/f27d.sanity has type file OK
4+0 records in
4+0 records out
16384 bytes (16 kB) copied, 0.00484449 s, 3.4 MB/s
PASS 27d (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27e: setstripe existing file (should return error) ========================================================== 20:35:02 (1713486902)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d27e.sanity
lfs setstripe: setstripe error for '/mnt/lustre/d27e.sanity/f27e.sanity': stripe already set
/mnt/lustre/d27e.sanity/f27e.sanity has type file OK
PASS 27e (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27f: setstripe with bad stripe size (should return error) ========================================================== 20:35:05 (1713486905)
striped dir -i1 -c2 -H crush /mnt/lustre/d27f.sanity
lfs setstripe setstripe: invalid stripe size '100'
Create a file with specified striping/composite layout, or set the default layout on an existing directory.
Usage: setstripe [--component-add|--component-del|--delete|-d]
		 [--comp-set --comp-id|-I COMP_ID|--comp-flags=COMP_FLAGS]
		 [--component-end|-E END_OFFSET]
		 [--copy=SOURCE_LAYOUT_FILE]|--yaml|-y YAML_TEMPLATE_FILE]
		 [--extension-size|--ext-size|-z EXT_SIZE]
		 [--help|-h] [--foreign=FOREIGN_TYPE --xattr|-x LAYOUT]
		 [--layout|-L PATTERN] [--mode FILE_MODE]
		 [--mirror-count|-N[MIRROR_COUNT]]
		 [--ost|-o OST_INDEX[,OST_INDEX,...]]
		 [--overstripe-count|-C STRIPE_COUNT]
		 [--pool|-p POOL_NAME] [--stripe-count|-c STRIPE_COUNT]
		 [--stripe-index|-i START_OST_IDX]
		 [--stripe-size|-S STRIPE_SIZE]
		 FILENAME|DIRECTORY
Can't lstat /mnt/lustre/d27f.sanity/f27f.sanity: No such file or directory
4+0 records in
4+0 records out
16384 bytes (16 kB) copied, 0.00650073 s, 2.5 MB/s
/mnt/lustre/d27f.sanity/f27f.sanity
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 0
	obdidx	objid	objid	group
	     0	  761	0x2f9	0x280000401
PASS 27f (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27g: /home/green/git/lustre-release/lustre/utils/lfs getstripe with no objects ========================================================== 20:35:08 (1713486908)
striped dir -i1 -c2 -H crush2 /mnt/lustre/d27g.sanity
/mnt/lustre/d27g.sanity/f27g.sanity has no stripe info
PASS 27g (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27ga: /home/green/git/lustre-release/lustre/utils/lfs getstripe with missing file (should return error) ========================================================== 20:35:12 (1713486912)
striped dir -i1 -c2 -H all_char /mnt/lustre/d27ga.sanity
0
lfs: getstripe for '/mnt/lustre/d27ga.sanity/f27ga.sanity.2' failed: No such file or directory
PASS 27ga (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27i: /home/green/git/lustre-release/lustre/utils/lfs getstripe with some objects ========================================================== 20:35:15 (1713486915)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d27i.sanity
PASS 27i (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27j: setstripe with bad stripe offset (should return error) ========================================================== 20:35:18 (1713486918)
striped dir -i1 -c2 -H crush2 /mnt/lustre/d27j.sanity
lfs setstripe: setstripe error for '/mnt/lustre/d27j.sanity/f27j.sanity': Invalid argument
PASS 27j (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27k: limit i_blksize for broken user apps ========================================================== 20:35:21 (1713486921)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d27k.sanity
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00309213 s, 1.3 MB/s
PASS 27k (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27l: check setstripe permissions (should return error) ========================================================== 20:35:25 (1713486925)
running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [setstripe] [-c] [1] [/mnt/lustre/f27l.sanity]
lfs setstripe: unable to open '/mnt/lustre/f27l.sanity': Permission denied (13)
PASS 27l (1s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity test_27m skipping SLOW test 27m
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27n: create file with some full OSTs ====== 20:35:28 (1713486928)
fail_loc=0
Waiting for MDT destroys to complete
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
OSTIDX=0 MDTIDX=0
osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=801
osp.lustre-OST0000-osc-MDT0000.prealloc_last_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=763
osp.lustre-OST0000-osc-MDT0000.prealloc_next_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0000-osc-MDT0000.prealloc_status=0
osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=801
osp.lustre-OST0001-osc-MDT0000.prealloc_last_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=763
osp.lustre-OST0001-osc-MDT0000.prealloc_next_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0001-osc-MDT0000.prealloc_status=0
fail_val=0
fail_loc=0x215
Creating to objid 801 on ost lustre-OST0000...
total: 40 open/close in 0.23 seconds: 173.80 ops/second
osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=801
osp.lustre-OST0000-osc-MDT0000.prealloc_last_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=763
osp.lustre-OST0000-osc-MDT0000.prealloc_next_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0000-osc-MDT0000.prealloc_status=-28
osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=833
osp.lustre-OST0001-osc-MDT0000.prealloc_last_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=803
osp.lustre-OST0001-osc-MDT0000.prealloc_next_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0001-osc-MDT0000.prealloc_status=0
fail_loc=0x80000215
/mnt/lustre/d27n.sanity/f27n.sanity
lmm_stripe_count:  2
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 0
	obdidx	objid	objid	group
	     0	  763	0x2fb	0x280000401
	     1	  803	0x323	0x2c0000401
fail_loc=0
Waiting for MDT destroys to complete
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
PASS 27n (17s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27o: create file with all full OSTs (should error) ========================================================== 20:35:46 (1713486946)
fail_loc=0
Waiting for MDT destroys to complete
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
OSTIDX=0 MDTIDX=0
osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=801
osp.lustre-OST0000-osc-MDT0000.prealloc_last_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=764
osp.lustre-OST0000-osc-MDT0000.prealloc_next_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0000-osc-MDT0000.prealloc_status=0
osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=833
osp.lustre-OST0001-osc-MDT0000.prealloc_last_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=804
osp.lustre-OST0001-osc-MDT0000.prealloc_next_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0001-osc-MDT0000.prealloc_status=0
fail_val=-1
fail_loc=0x215
Creating to objid 801 on ost lustre-OST0000...
total: 39 open/close in 0.23 seconds: 170.83 ops/second
osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=833
osp.lustre-OST0000-osc-MDT0000.prealloc_last_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=803
osp.lustre-OST0000-osc-MDT0000.prealloc_next_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0000-osc-MDT0000.prealloc_status=0
osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=833
osp.lustre-OST0001-osc-MDT0000.prealloc_last_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=804
osp.lustre-OST0001-osc-MDT0000.prealloc_next_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0001-osc-MDT0000.prealloc_status=0
fail_loc=0x215
lfs mkdir: dirstripe error on '/mnt/lustre/d27o.sanity': stripe already set
lfs setdirstripe: cannot create dir '/mnt/lustre/d27o.sanity': File exists
OSTIDX=1 MDTIDX=0
osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=833
osp.lustre-OST0000-osc-MDT0000.prealloc_last_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=803
osp.lustre-OST0000-osc-MDT0000.prealloc_next_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0000-osc-MDT0000.prealloc_status=0
osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=833
osp.lustre-OST0001-osc-MDT0000.prealloc_last_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=804
osp.lustre-OST0001-osc-MDT0000.prealloc_next_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0001-osc-MDT0000.prealloc_status=0
fail_val=-1
fail_loc=0x215
Creating to objid 833 on ost lustre-OST0001...
open(/mnt/lustre/d27o.sanity/lustre-OST0001/f804) error: No space left on device
total: 0 open/close in 0.01 seconds: 0.00 ops/second
osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=833
osp.lustre-OST0000-osc-MDT0000.prealloc_last_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=803
osp.lustre-OST0000-osc-MDT0000.prealloc_next_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0000-osc-MDT0000.prealloc_status=-28
osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=833
osp.lustre-OST0001-osc-MDT0000.prealloc_last_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=804
osp.lustre-OST0001-osc-MDT0000.prealloc_next_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0001-osc-MDT0000.prealloc_status=-28
fail_loc=0x215
touch: cannot touch '/mnt/lustre/d27o.sanity/f27o.sanity': No space left on device
fail_loc=0
Waiting for MDT destroys to complete
Waiting 10s for ''
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
PASS 27o (22s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27oo: don't let few threads to reserve too many objects ========================================================== 20:36:09 (1713486969)
Waiting for MDT destroys to complete
lov.lustre-MDT0000-mdtlov.qos_threshold_rr=0%
lov.lustre-MDT0001-mdtlov.qos_threshold_rr=0%
Stopping /mnt/lustre-ost1 (opts:) on oleg329-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg329-server: oleg329-server.virtnet: executing set_default_debug all all
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
Started lustre-OST0000
lov.lustre-MDT0000-mdtlov.qos_threshold_rr=17%
lov.lustre-MDT0001-mdtlov.qos_threshold_rr=17%
PASS 27oo (27s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27p: append to a truncated file with some full OSTs ========================================================== 20:36:37 (1713486997)
fail_loc=0
Waiting for MDT destroys to complete
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
striped dir -i1 -c2 -H all_char /mnt/lustre/d27p.sanity
/mnt/lustre/d27p.sanity/f27p.sanity has size 80000000 OK
lfs mkdir: dirstripe error on '/mnt/lustre/d27p.sanity': stripe already set
lfs setdirstripe: cannot create dir '/mnt/lustre/d27p.sanity': File exists
OSTIDX=0 MDTIDX=1
osp.lustre-OST0000-osc-MDT0001.prealloc_force_new_seq=0
osp.lustre-OST0000-osc-MDT0001.prealloc_last_id=289
osp.lustre-OST0000-osc-MDT0001.prealloc_last_seq=0x280000400
osp.lustre-OST0000-osc-MDT0001.prealloc_next_id=244
osp.lustre-OST0000-osc-MDT0001.prealloc_next_seq=0x280000400
osp.lustre-OST0000-osc-MDT0001.prealloc_reserved=0
osp.lustre-OST0000-osc-MDT0001.prealloc_status=0
osp.lustre-OST0001-osc-MDT0001.prealloc_force_new_seq=0
osp.lustre-OST0001-osc-MDT0001.prealloc_last_id=257
osp.lustre-OST0001-osc-MDT0001.prealloc_last_seq=0x2c0000400
osp.lustre-OST0001-osc-MDT0001.prealloc_next_id=241
osp.lustre-OST0001-osc-MDT0001.prealloc_next_seq=0x2c0000400
osp.lustre-OST0001-osc-MDT0001.prealloc_reserved=0
osp.lustre-OST0001-osc-MDT0001.prealloc_status=0
fail_val=0
fail_loc=0x215
Creating to objid 289 on ost lustre-OST0000...
total: 47 open/close in 0.40 seconds: 116.27 ops/second
osp.lustre-OST0000-osc-MDT0001.prealloc_force_new_seq=0
osp.lustre-OST0000-osc-MDT0001.prealloc_last_id=321
osp.lustre-OST0000-osc-MDT0001.prealloc_last_seq=0x280000400
osp.lustre-OST0000-osc-MDT0001.prealloc_next_id=291
osp.lustre-OST0000-osc-MDT0001.prealloc_next_seq=0x280000400
osp.lustre-OST0000-osc-MDT0001.prealloc_reserved=0
osp.lustre-OST0000-osc-MDT0001.prealloc_status=0
osp.lustre-OST0001-osc-MDT0001.prealloc_force_new_seq=0
osp.lustre-OST0001-osc-MDT0001.prealloc_last_id=257
osp.lustre-OST0001-osc-MDT0001.prealloc_last_seq=0x2c0000400
osp.lustre-OST0001-osc-MDT0001.prealloc_next_id=241
osp.lustre-OST0001-osc-MDT0001.prealloc_next_seq=0x2c0000400
osp.lustre-OST0001-osc-MDT0001.prealloc_reserved=0
osp.lustre-OST0001-osc-MDT0001.prealloc_status=0
fail_loc=0x80000215
/mnt/lustre/d27p.sanity/f27p.sanity has size 80000004 OK
/mnt/lustre/d27p.sanity/f27p.sanity
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 1
	obdidx	objid	objid	group
	     1	  805	0x325	0x2c0000401
fail_loc=0
Waiting for MDT destroys to complete
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
PASS 27p (23s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27q: append to truncated file with all OSTs full (should error) ========================================================== 20:37:02 (1713487022)
fail_loc=0
Waiting for MDT destroys to complete
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
/mnt/lustre/d27q.sanity/f27q.sanity has size 80000000 OK
lfs mkdir: dirstripe error on '/mnt/lustre/d27q.sanity': stripe already set
lfs setdirstripe: cannot create dir '/mnt/lustre/d27q.sanity': File exists
OSTIDX=0 MDTIDX=0
osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=833
osp.lustre-OST0000-osc-MDT0000.prealloc_last_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=806
osp.lustre-OST0000-osc-MDT0000.prealloc_next_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0000-osc-MDT0000.prealloc_status=0
osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=833
osp.lustre-OST0001-osc-MDT0000.prealloc_last_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=806
osp.lustre-OST0001-osc-MDT0000.prealloc_next_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0001-osc-MDT0000.prealloc_status=0
fail_val=-1
fail_loc=0x215
Creating to objid 833 on ost lustre-OST0000...
total: 29 open/close in 0.18 seconds: 157.82 ops/second
osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=865
osp.lustre-OST0000-osc-MDT0000.prealloc_last_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=830
osp.lustre-OST0000-osc-MDT0000.prealloc_next_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0000-osc-MDT0000.prealloc_status=-28
osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=833
osp.lustre-OST0001-osc-MDT0000.prealloc_last_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=811
osp.lustre-OST0001-osc-MDT0000.prealloc_next_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0001-osc-MDT0000.prealloc_status=0
fail_loc=0x215
lfs mkdir: dirstripe error on '/mnt/lustre/d27q.sanity': stripe already set
lfs setdirstripe: cannot create dir '/mnt/lustre/d27q.sanity': File exists
OSTIDX=1 MDTIDX=0
osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=865
osp.lustre-OST0000-osc-MDT0000.prealloc_last_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=830
osp.lustre-OST0000-osc-MDT0000.prealloc_next_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0000-osc-MDT0000.prealloc_status=-28
osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=833
osp.lustre-OST0001-osc-MDT0000.prealloc_last_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=811
osp.lustre-OST0001-osc-MDT0000.prealloc_next_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0001-osc-MDT0000.prealloc_status=-28
fail_val=-1
fail_loc=0x215
Creating to objid 833 on ost lustre-OST0001...
open(/mnt/lustre/d27q.sanity/lustre-OST0001/f811) error: No space left on device
total: 0 open/close in 0.01 seconds: 0.00 ops/second
osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=865
osp.lustre-OST0000-osc-MDT0000.prealloc_last_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=830
osp.lustre-OST0000-osc-MDT0000.prealloc_next_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0000-osc-MDT0000.prealloc_status=-28
osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=833
osp.lustre-OST0001-osc-MDT0000.prealloc_last_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=811
osp.lustre-OST0001-osc-MDT0000.prealloc_next_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0001-osc-MDT0000.prealloc_status=-28
fail_loc=0x215
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 2034: /mnt/lustre/d27q.sanity/f27q.sanity: No space left on device
/mnt/lustre/d27q.sanity/f27q.sanity has size 80000000 OK
fail_loc=0
Waiting for MDT destroys to complete
Waiting 10s for ''
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
PASS 27q (20s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 27r: stripe file with some full OSTs (shouldn't LBUG) =========================================================== 20:37:24 (1713487044)
fail_loc=0
Waiting for MDT destroys to complete
pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1
OSTIDX=0 MDTIDX=0
osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=865
osp.lustre-OST0000-osc-MDT0000.prealloc_last_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=830
osp.lustre-OST0000-osc-MDT0000.prealloc_next_seq=0x280000401
osp.lustre-OST0000-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0000-osc-MDT0000.prealloc_status=0
osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0
osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=833
osp.lustre-OST0001-osc-MDT0000.prealloc_last_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=811
osp.lustre-OST0001-osc-MDT0000.prealloc_next_seq=0x2c0000401
osp.lustre-OST0001-osc-MDT0000.prealloc_reserved=0
osp.lustre-OST0001-osc-MDT0000.prealloc_status=0
fail_val=0
fail_loc=0x215
Creating to objid 865 on ost lustre-OST0000...
total: 37 open/close in 0.27 seconds: 136.54 ops/second osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0 osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=897 osp.lustre-OST0000-osc-MDT0000.prealloc_last_seq=0x280000401 osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=867 osp.lustre-OST0000-osc-MDT0000.prealloc_next_seq=0x280000401 osp.lustre-OST0000-osc-MDT0000.prealloc_reserved=0 osp.lustre-OST0000-osc-MDT0000.prealloc_status=0 osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0 osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=833 osp.lustre-OST0001-osc-MDT0000.prealloc_last_seq=0x2c0000401 osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=811 osp.lustre-OST0001-osc-MDT0000.prealloc_next_seq=0x2c0000401 osp.lustre-OST0001-osc-MDT0000.prealloc_reserved=0 osp.lustre-OST0001-osc-MDT0000.prealloc_status=0 fail_loc=0x80000215 fail_loc=0 Waiting for MDT destroys to complete pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1 PASS 27r (16s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27s: lsm_xfersize overflow (should error) (bug 10725) ========================================================== 20:37:42 (1713487062) striped dir -i1 -c2 -H crush2 /mnt/lustre/d27s.sanity lfs setstripe: error: stripe size '4294967296' over 4GB limit: Invalid argument (22) PASS 27s (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27t: check that utils parse path correctly ========================================================== 20:37:45 (1713487065) f27t.sanity lmm_stripe_count: 1 lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 1 obdidx objid objid group 1 812 0x32c 0x2c0000401 PASS 27t (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27u: skip object creation on OSC w/o objects ========================================================== 20:37:48 (1713487068) fail_loc=0x139 
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d27u.sanity total: 1000 open/close in 1.77 seconds: 564.74 ops/second fail_loc=0 - unlinked 0 (time 1713487075 ; total 0 ; last 0) total: 1000 unlinks in 1 seconds: 1000.000000 unlinks/second Waiting for MDT destroys to complete cleanup time 9 PASS 27u (16s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27v: skip object creation on slow OST ===== 20:38:06 (1713487086) OSTIDX=0 MDTIDX=0 osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0 osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=897 osp.lustre-OST0000-osc-MDT0000.prealloc_last_seq=0x280000401 osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=868 osp.lustre-OST0000-osc-MDT0000.prealloc_next_seq=0x280000401 osp.lustre-OST0000-osc-MDT0000.prealloc_reserved=0 osp.lustre-OST0000-osc-MDT0000.prealloc_status=0 osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0 osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=1345 osp.lustre-OST0001-osc-MDT0000.prealloc_last_seq=0x2c0000401 osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=1313 osp.lustre-OST0001-osc-MDT0000.prealloc_next_seq=0x2c0000401 osp.lustre-OST0001-osc-MDT0000.prealloc_reserved=0 osp.lustre-OST0001-osc-MDT0000.prealloc_status=0 fail_val=-1 fail_loc=0x215 Creating to objid 897 on ost lustre-OST0000... 
total: 31 open/close in 0.16 seconds: 191.10 ops/second osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0 osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=929 osp.lustre-OST0000-osc-MDT0000.prealloc_last_seq=0x280000401 osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=899 osp.lustre-OST0000-osc-MDT0000.prealloc_next_seq=0x280000401 osp.lustre-OST0000-osc-MDT0000.prealloc_reserved=0 osp.lustre-OST0000-osc-MDT0000.prealloc_status=0 osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0 osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=1345 osp.lustre-OST0001-osc-MDT0000.prealloc_last_seq=0x2c0000401 osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=1313 osp.lustre-OST0001-osc-MDT0000.prealloc_next_seq=0x2c0000401 osp.lustre-OST0001-osc-MDT0000.prealloc_reserved=0 osp.lustre-OST0001-osc-MDT0000.prealloc_status=0 fail_loc=0x215 lfs mkdir: dirstripe error on '/mnt/lustre/d27v.sanity': stripe already set lfs setdirstripe: cannot create dir '/mnt/lustre/d27v.sanity': File exists OSTIDX=1 MDTIDX=0 osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0 osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=929 osp.lustre-OST0000-osc-MDT0000.prealloc_last_seq=0x280000401 osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=899 osp.lustre-OST0000-osc-MDT0000.prealloc_next_seq=0x280000401 osp.lustre-OST0000-osc-MDT0000.prealloc_reserved=0 osp.lustre-OST0000-osc-MDT0000.prealloc_status=0 osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0 osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=1345 osp.lustre-OST0001-osc-MDT0000.prealloc_last_seq=0x2c0000401 osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=1313 osp.lustre-OST0001-osc-MDT0000.prealloc_next_seq=0x2c0000401 osp.lustre-OST0001-osc-MDT0000.prealloc_reserved=0 osp.lustre-OST0001-osc-MDT0000.prealloc_status=0 fail_val=-1 fail_loc=0x215 Creating to objid 1345 on ost lustre-OST0001... 
total: 34 open/close in 0.18 seconds: 194.28 ops/second osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0 osp.lustre-OST0000-osc-MDT0000.prealloc_last_id=929 osp.lustre-OST0000-osc-MDT0000.prealloc_last_seq=0x280000401 osp.lustre-OST0000-osc-MDT0000.prealloc_next_id=899 osp.lustre-OST0000-osc-MDT0000.prealloc_next_seq=0x280000401 osp.lustre-OST0000-osc-MDT0000.prealloc_reserved=0 osp.lustre-OST0000-osc-MDT0000.prealloc_status=0 osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0 osp.lustre-OST0001-osc-MDT0000.prealloc_last_id=1377 osp.lustre-OST0001-osc-MDT0000.prealloc_last_seq=0x2c0000401 osp.lustre-OST0001-osc-MDT0000.prealloc_next_id=1347 osp.lustre-OST0001-osc-MDT0000.prealloc_next_seq=0x2c0000401 osp.lustre-OST0001-osc-MDT0000.prealloc_reserved=0 osp.lustre-OST0001-osc-MDT0000.prealloc_status=0 fail_loc=0x215 fail_loc=0 Waiting for MDT destroys to complete Waiting 10s for '' pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1 fail_loc=0x705 total: 32 open/close in 0.17 seconds: 189.24 ops/second fail_loc=0 Waiting for MDT destroys to complete pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1 PASS 27v (34s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27w: check /home/green/git/lustre-release/lustre/utils/lfs setstripe -S and getstrip -d options ========================================================== 20:38:42 (1713487122) striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d27w.sanity PASS 27w (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27wa: check /home/green/git/lustre-release/lustre/utils/lfs setstripe -c -i options ========================================================== 20:38:45 (1713487125) striped dir -i1 -c2 -H crush /mnt/lustre/d27wa.sanity PASS 27wa (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27x: create files while OST0 is degraded == 20:38:48 
(1713487128) striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d27x.sanity total: 2 open/close in 0.02 seconds: 114.26 ops/second PASS 27x (12s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27y: create files while OST0 is degraded and the rest inactive ========================================================== 20:39:02 (1713487142) lustre-OST0001-osc-MDT0001 is Deactivated: lustre-OST0001-osc-MDT0000 is Deactivated: striped dir -i1 -c2 -H crush /mnt/lustre/d27y.sanity lustre-OST0000 is degraded: lustre-OST0000 is degraded: total: 2 open/close in 0.02 seconds: 116.29 ops/second lustre-OST0000 is recovered from degraded: lustre-OST0000 is recovered from degraded: PASS 27y (25s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27z: check SEQ/OID on the MDT and OST filesystems ========================================================== 20:39:28 (1713487168) striped dir -i1 -c2 -H all_char /mnt/lustre/d27z.sanity 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.0321538 s, 32.6 MB/s 2+0 records in 2+0 records out 2097152 bytes (2.1 MB) copied, 0.0587775 s, 35.7 MB/s check file /mnt/lustre/d27z.sanity/f27z.sanity-1 FID seq 0x200000402, oid 0xd509 ver 0x0 LOV seq 0x200000402, oid 0xd509, count: 1 want: stripe:0 ost:0 oid:921/0x399 seq:0x280000401 fid: parent=[0x200000402:0xd509:0x0] stripe=0 stripe_size=65536 stripe_count=1 layout_version=0 range=0 check file /mnt/lustre/d27z.sanity/f27z.sanity-2 FID seq 0x240000402, oid 0xfb04 ver 0x0 LOV seq 0x240000402, oid 0xfb04, count: 2 want: stripe:0 ost:1 oid:742/0x2e6 seq:0x2c0000400 fid: parent=[0x240000402:0xfb04:0x0] stripe=0 stripe_size=1048576 stripe_count=2 layout_version=0 range=0 want: stripe:1 ost:0 oid:291/0x123 seq:0x280000400 fid: parent=[0x240000402:0xfb04:0x0] stripe=1 stripe_size=1048576 stripe_count=2 layout_version=0 range=0 PASS 27z (11s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y 
debug_raw_pointers=Y == sanity test 27A: check filesystem-wide default LOV EA values ========================================================== 20:39:41 (1713487181) PASS 27A (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27B: call setstripe on open unlinked file/rename victim ========================================================== 20:39:44 (1713487184) striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d27B.sanity LL_IOC_LOV_SETSTRIPE: Stale file handle LL_IOC_LOV_SETSTRIPE: Stale file handle PASS 27B (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27Ca: check full striping across all OSTs ========================================================== 20:39:47 (1713487187) striped dir -i1 -c2 -H crush2 /mnt/lustre/d27Ca.sanity OST Index: 0 1 OST Index: 1 0 PASS 27Ca (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27Cb: more stripes than OSTs with -C ====== 20:39:50 (1713487190) striped dir -i1 -c2 -H crush /mnt/lustre/d27Cb.sanity lmm_pattern: raid0,overstriped 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.113055 s, 37.1 MB/s PASS 27Cb (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27Cc: fewer stripes than OSTs does not set overstriping ========================================================== 20:39:53 (1713487193) striped dir -i1 -c2 -H crush /mnt/lustre/d27Cc.sanity 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.115514 s, 36.3 MB/s PASS 27Cc (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27Cd: test maximum stripe count =========== 20:39:57 (1713487197) osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=1 osp.lustre-OST0001-osc-MDT0001.prealloc_force_new_seq=1 osp.lustre-OST0000-osc-MDT0001.prealloc_force_new_seq=1 
osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=1 Creating to objid 1409 on ost lustre-OST0001... Creating to objid 961 on ost lustre-OST0000... Creating to objid 321 on ost lustre-OST0000... Creating to objid 769 on ost lustre-OST0001... total: 23 open/close in 0.17 seconds: 139.07 ops/second total: 27 open/close in 0.20 seconds: 133.84 ops/second total: 41 open/close in 0.29 seconds: 142.58 ops/second total: 43 open/close in 0.31 seconds: 139.51 ops/second osp.lustre-OST0001-osc-MDT0001.prealloc_force_new_seq=0 osp.lustre-OST0000-osc-MDT0001.prealloc_force_new_seq=0 osp.lustre-OST0000-osc-MDT0000.prealloc_force_new_seq=0 osp.lustre-OST0001-osc-MDT0000.prealloc_force_new_seq=0 striped dir -i1 -c2 -H crush /mnt/lustre/d27Cd.sanity lmm_pattern: raid0,overstriped 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.190559 s, 22.0 MB/s PASS 27Cd (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27Ce: test pool with overstriping ========= 20:40:14 (1713487214) striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d27Ce.sanity Creating new pool oleg329-server: Pool lustre.test_27Ce created Adding targets to pool oleg329-server: OST lustre-OST0000_UUID added to pool lustre.test_27Ce Waiting 90s for 'lustre-OST0000_UUID ' lmm_pattern: raid0,overstriped 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.130546 s, 32.1 MB/s Destroy the created pools: test_27Ce lustre.test_27Ce oleg329-server: OST lustre-OST0000_UUID removed from pool lustre.test_27Ce oleg329-server: Pool lustre.test_27Ce destroyed Waiting 90s for 'foo' PASS 27Ce (13s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27Cf: test default inheritance with overstriping ========================================================== 20:40:29 (1713487229) striped dir -i1 -c2 -H crush /mnt/lustre/d27Cf.sanity lmm_pattern: raid0,overstriped 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) 
copied, 0.0941754 s, 44.5 MB/s PASS 27Cf (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27Cg: test setstripe with wrong OST idx === 20:40:32 (1713487232) lfs setstripe: setstripe error for '/mnt/lustre/f27Cg.sanity': Invalid argument PASS 27Cg (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27Ci: add an overstriping component ======= 20:40:35 (1713487235) raid0,overstriped 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.327092 s, 32.1 MB/s PASS 27Ci (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27D: validate llapi_layout API ============ 20:40:39 (1713487239) striped dir -i1 -c2 -H crush /mnt/lustre/d27D.sanity Creating new pool oleg329-server: Pool lustre.testpool created Adding targets to pool oleg329-server: OST lustre-OST0000_UUID added to pool lustre.testpool oleg329-server: OST lustre-OST0001_UUID added to pool lustre.testpool Waiting 90s for 'lustre-OST0000_UUID lustre-OST0001_UUID ' Updated after 2s: want 'lustre-OST0000_UUID lustre-OST0001_UUID ' got 'lustre-OST0000_UUID lustre-OST0001_UUID ' test 0: Read/write layout attributes then create a file ................................. pass test 1: Read test0 file by path and verify attributes ................................... pass test 2: Read test0 file by FD and verify attributes ..................................... pass test 3: Read test0 file by FID and verify attributes .................................... pass test 4: Verify compatibility with 'lfs setstripe' ....................................... pass test 5: llapi_layout_get_by_path ENOENT handling ........................................ pass test 6: llapi_layout_get_by_fd EBADF handling ........................................... pass test 7: llapi_layout_get_by_path EACCES handling ........................................ 
pass test 8: llapi_layout_get_by_path ENODATA handling ....................................... pass test 9: verify llapi_layout_pattern_set() return values ................................. pass test 10: stripe_count error handling ..................................................... pass test 11: stripe_size error handling ...................................................... pass test 12: pool_name error handling ........................................................ pass test 13: ost_index error handling ........................................................ pass test 14: llapi_layout_file_create error handling ......................................... pass test 15: Can't change striping attributes of existing file ............................... pass test 16: Default stripe attributes are applied as expected ............................... pass test 17: LLAPI_LAYOUT_WIDE is honored .................................................... pass test 18: Setting pool with fsname.pool notation .......................................... pass test 19: Maximum length pool name is NULL-terminated ..................................... pass test 20: LLAPI_LAYOUT_DEFAULT is honored ................................................. pass test 21: llapi_layout_file_create fails for non-Lustre file .............................. pass test 22: llapi_layout_file_create applied mode correctly ................................. pass test 23: llapi_layout_get_by_path fails for non-Lustre file .............................. pass test 24: LAYOUT_GET_EXPECTED works with existing file .................................... pass test 25: LAYOUT_GET_EXPECTED works with directory ........................................ pass test 26: LAYOUT_GET_EXPECTED partially specified parent .................................. pass test 27: LAYOUT_GET_EXPECTED with non existing file ...................................... 
pass test 28: LLAPI_LAYOUT_WIDE returned as expected .......................................... pass test 29: set ost index to non-zero stripe number ......................................... pass test 30: create composite file, traverse components ...................................... pass test 31: add/delete component to/from existing file ...................................... pass test 32: Test overstriping with layout_file_create ....................................... pass test 33: Test overstriping with llapi_file_open .......................................... pass test 34: create simple valid & invalid self extending layouts ............................ skip Destroy the created pools: testpool lustre.testpool oleg329-server: OST lustre-OST0000_UUID removed from pool lustre.testpool oleg329-server: OST lustre-OST0001_UUID removed from pool lustre.testpool oleg329-server: Pool lustre.testpool destroyed PASS 27D (14s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27E: check that default extended attribute size properly increases ========================================================== 20:40:55 (1713487255) -rw-r--r-- 1 root root 0 Apr 18 20:40 /mnt/lustre/f27E.sanity-1 PASS 27E (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27F: Client resend delayed layout creation with non-zero size ========================================================== 20:40:59 (1713487259) striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d27F.sanity Stopping /mnt/lustre-ost1 (opts:) on oleg329-server Stopping /mnt/lustre-ost2 (opts:) on oleg329-server /mnt/lustre/d27F.sanity/f0 has size 1050000 OK Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1 seq.cli-lustre-OST0000-super.width=65536 oleg329-server: oleg329-server.virtnet: executing set_default_debug all all pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1 Started lustre-OST0000 Starting 
ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2 seq.cli-lustre-OST0001-super.width=65536 oleg329-server: oleg329-server.virtnet: executing set_default_debug all all pdsh@oleg329-client: oleg329-server: ssh exited with exit code 1 Started lustre-OST0001 PASS 27F (23s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27G: Clear OST pool from stripe =========== 20:41:23 (1713487283) striped dir -i1 -c2 -H crush2 /mnt/lustre/d27G.sanity Creating new pool oleg329-server: Pool lustre.testpool created Adding targets to pool oleg329-server: OST lustre-OST0000_UUID added to pool lustre.testpool /mnt/lustre/d27G.sanity/f27G.sanity.default /mnt/lustre/d27G.sanity/f27G.sanity.pfl Destroy the created pools: testpool lustre.testpool oleg329-server: OST lustre-OST0000_UUID removed from pool lustre.testpool oleg329-server: Pool lustre.testpool destroyed PASS 27G (14s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27H: Set specific OSTs stripe ============= 20:41:39 (1713487299) SKIP: sanity test_27H needs >= 3 OSTs SKIP 27H (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27I: check that root dir striping does not break parent dir one ========================================================== 20:41:41 (1713487301) Creating new pool oleg329-server: Pool lustre.test_27I created Adding targets to pool oleg329-server: OST lustre-OST0001_UUID added to pool lustre.test_27I striped dir -i1 -c2 -H all_char /mnt/lustre/d27I.sanity /mnt/lustre/d27I.sanity/f27I.sanity lmm_stripe_count: 1 lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 1 lmm_pool: test_27I obdidx objid objid group 1 999 0x3e7 0x2c0000402 Destroy the created pools: test_27I lustre.test_27I oleg329-server: OST lustre-OST0001_UUID removed from pool lustre.test_27I oleg329-server: Pool lustre.test_27I destroyed PASS 27I (12s) 
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27J: basic ops on file with foreign LOV === 20:41:55 (1713487315) striped dir -i1 -c2 -H all_char /mnt/lustre/d27J.sanity lfs setstripe setstripe: hex flags must be specified with --foreign option Create a file with specified striping/composite layout, or set the default layout on an existing directory. Usage: setstripe [--component-add|--component-del|--delete|-d] [--comp-set --comp-id|-I COMP_ID|--comp-flags=COMP_FLAGS] [--component-end|-E END_OFFSET] [--copy=SOURCE_LAYOUT_FILE]|--yaml|-y YAML_TEMPLATE_FILE] [--extension-size|--ext-size|-z EXT_SIZE] [--help|-h] [--foreign=FOREIGN_TYPE --xattr|-x LAYOUT] [--layout|-L PATTERN] [--mode FILE_MODE] [--mirror-count|-N[MIRROR_COUNT]] [--ost|-o OST_INDEX[,OST_INDEX,...]] [--overstripe-count|-C STRIPE_COUNT] [--pool|-p POOL_NAME] [--stripe-count|-c STRIPE_COUNT] [--stripe-index|-i START_OST_IDX] [--stripe-size|-S STRIPE_SIZE] FILENAME|DIRECTORY lfs setstripe setstripe: invalid hex flags 'foo' Create a file with specified striping/composite layout, or set the default layout on an existing directory. Usage: setstripe [--component-add|--component-del|--delete|-d] [--comp-set --comp-id|-I COMP_ID|--comp-flags=COMP_FLAGS] [--component-end|-E END_OFFSET] [--copy=SOURCE_LAYOUT_FILE]|--yaml|-y YAML_TEMPLATE_FILE] [--extension-size|--ext-size|-z EXT_SIZE] [--help|-h] [--foreign=FOREIGN_TYPE --xattr|-x LAYOUT] [--layout|-L PATTERN] [--mode FILE_MODE] [--mirror-count|-N[MIRROR_COUNT]] [--ost|-o OST_INDEX[,OST_INDEX,...]] [--overstripe-count|-C STRIPE_COUNT] [--pool|-p POOL_NAME] [--stripe-count|-c STRIPE_COUNT] [--stripe-index|-i START_OST_IDX] [--stripe-size|-S STRIPE_SIZE] FILENAME|DIRECTORY lfs setstripe setstripe: invalid hex flags '0xffffffff' Create a file with specified striping/composite layout, or set the default layout on an existing directory. 
Usage: setstripe [--component-add|--component-del|--delete|-d] [--comp-set --comp-id|-I COMP_ID|--comp-flags=COMP_FLAGS] [--component-end|-E END_OFFSET] [--copy=SOURCE_LAYOUT_FILE]|--yaml|-y YAML_TEMPLATE_FILE] [--extension-size|--ext-size|-z EXT_SIZE] [--help|-h] [--foreign=FOREIGN_TYPE --xattr|-x LAYOUT] [--layout|-L PATTERN] [--mode FILE_MODE] [--mirror-count|-N[MIRROR_COUNT]] [--ost|-o OST_INDEX[,OST_INDEX,...]] [--overstripe-count|-C STRIPE_COUNT] [--pool|-p POOL_NAME] [--stripe-count|-c STRIPE_COUNT] [--stripe-index|-i START_OST_IDX] [--stripe-size|-S STRIPE_SIZE] FILENAME|DIRECTORY lov_foreign_magic: 0x0BD70BD0 lov_xattr_size: 89 lov_foreign_size: 73 lov_foreign_type: 1 lov_foreign_flags: 0x0000DA08 lfm_magic: 0x0BD70BD0 lfm_length: 73 lfm_type: 0x00000000 (none) lfm_flags: 0x0000DA08 lfm_value: '6ab04ec9-da39-4e69-83b5-00dffffb7041@6b0865ae-cf67-47d2-9577-790e69252b0d' lfs setstripe: setstripe error for '/mnt/lustre/d27J.sanity/f27J.sanity': stripe already set lfs setstripe: setstripe error for '/mnt/lustre/d27J.sanity/f27J.sanity2': stripe already set cat: /mnt/lustre/d27J.sanity/f27J.sanity: No data available cat: /mnt/lustre/d27J.sanity/f27J.sanity2: No data available cat: write error: Bad file descriptor cat: write error: Bad file descriptor PASS 27J (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27K: basic ops on dir with foreign LMV ==== 20:41:59 (1713487319) striped dir -i1 -c2 -H crush /mnt/lustre/d27K.sanity lfs setdirstripe: hex flags must be specified with --foreign option Create striped directory on specified MDT, same as mkdir. May be restricted to root or group users, depending on settings. usage: setdirstripe [OPTION] [--mdt-count|-c stripe_count> [--help|-h] [--mdt-hash|-H mdt_hash] [--mdt-index|-i mdt_index[,mdt_index,...] 
[--mdt-overcount|-C stripe_count> [--default|-D] [--mode|-o mode] [--max-inherit|-X max_inherit] [--max-inherit-rr max_inherit_rr] To create dir with a foreign (free format) layout : setdirstripe|mkdir --foreign[=FOREIGN_TYPE] -x|-xattr STRING [--mode|-o MODE] [--flags HEX] DIRECTORY lfs setdirstripe: invalid hex flags 'foo' Create striped directory on specified MDT, same as mkdir. May be restricted to root or group users, depending on settings. usage: setdirstripe [OPTION] [--mdt-count|-c stripe_count> [--help|-h] [--mdt-hash|-H mdt_hash] [--mdt-index|-i mdt_index[,mdt_index,...] [--mdt-overcount|-C stripe_count> [--default|-D] [--mode|-o mode] [--max-inherit|-X max_inherit] [--max-inherit-rr max_inherit_rr] To create dir with a foreign (free format) layout : setdirstripe|mkdir --foreign[=FOREIGN_TYPE] -x|-xattr STRING [--mode|-o MODE] [--flags HEX] DIRECTORY lfs setdirstripe: invalid hex flags '0xffffffff' Create striped directory on specified MDT, same as mkdir. May be restricted to root or group users, depending on settings. usage: setdirstripe [OPTION] [--mdt-count|-c stripe_count> [--help|-h] [--mdt-hash|-H mdt_hash] [--mdt-index|-i mdt_index[,mdt_index,...] 
[--mdt-overcount|-C stripe_count> [--default|-D] [--mode|-o mode] [--max-inherit|-X max_inherit] [--max-inherit-rr max_inherit_rr] To create dir with a foreign (free format) layout : setdirstripe|mkdir --foreign[=FOREIGN_TYPE] -x|-xattr STRING [--mode|-o MODE] [--flags HEX] DIRECTORY lmv_foreign_magic: 0xcd50cd0 lmv_xattr_size: 89 lmv_foreign_type: 1 lmv_foreign_flags: 55813 lfm_magic: 0x0CD50CD0 lfm_length: 73 lfm_type: 0x00000000 (none) lfm_flags: 0x0000DA05 lfm_value: '7ddc8bfb-4776-4857-84ce-2167fa410c70@9a6c9980-4166-4845-aac6-5add468add6b' lfm_magic: 0x0CD50CD0 lfm_length: 73 lfm_type: 0x00000000 (none) lfm_flags: 0x0000DA05 lfm_value: '7ddc8bfb-4776-4857-84ce-2167fa410c70@9a6c9980-4166-4845-aac6-5add468add6b' touch: cannot touch '/mnt/lustre/d27K.sanity/d27K.sanity/f27K.sanity': No data available touch: cannot touch '/mnt/lustre/d27K.sanity/d27K.sanity2/f27K.sanity': No data available PASS 27K (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27L: lfs pool_list gives correct pool name ========================================================== 20:42:02 (1713487322) Creating new pool oleg329-server: Pool lustre.test_27L created lustre.test_27L Destroy the created pools: test_27L lustre.test_27L oleg329-server: Pool lustre.test_27L destroyed PASS 27L (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27M: test O_APPEND striping =============== 20:42:13 (1713487333) striped dir -i1 -c2 -H crush /mnt/lustre/d27M.sanity mdd.lustre-MDT0000.append_stripe_count=0 mdd.lustre-MDT0001.append_stripe_count=0 mdd.lustre-MDT0000.append_stripe_count=2 mdd.lustre-MDT0001.append_stripe_count=2 mdd.lustre-MDT0000.append_stripe_count=-1 mdd.lustre-MDT0001.append_stripe_count=-1 mdd.lustre-MDT0000.append_stripe_count=1 mdd.lustre-MDT0001.append_stripe_count=1 Creating new pool oleg329-server: Pool lustre.test_27M created Adding targets to pool oleg329-server: OST 
lustre-OST0000_UUID added to pool lustre.test_27M oleg329-server: OST lustre-OST0001_UUID added to pool lustre.test_27M mdd.lustre-MDT0000.append_pool=test_27M mdd.lustre-MDT0001.append_pool=test_27M mdd.lustre-MDT0000.append_stripe_count=0 mdd.lustre-MDT0001.append_stripe_count=0 mdd.lustre-MDT0000.append_pool=none mdd.lustre-MDT0001.append_pool=none Destroy the created pools: test_27M lustre.test_27M oleg329-server: OST lustre-OST0000_UUID removed from pool lustre.test_27M oleg329-server: OST lustre-OST0001_UUID removed from pool lustre.test_27M oleg329-server: Pool lustre.test_27M destroyed mdd.lustre-MDT0000.append_stripe_count=1 mdd.lustre-MDT0001.append_stripe_count=1 mdd.lustre-MDT0000.append_pool=none mdd.lustre-MDT0001.append_pool=none PASS 27M (18s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27N: lctl pool_list on separate MGS gives correct pool name ========================================================== 20:42:33 (1713487353) SKIP: sanity test_27N needs separate MGS/MDT SKIP 27N (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27O: basic ops on foreign file of symlink type ========================================================== 20:42:35 (1713487355) striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d27O.sanity llite.lustre-ffff8800b6384000.foreign_symlink_enable=1 lfm_magic: 0x0BD70BD0 lfm_type: 0x0000DA05 (symlink) lfm_flags: 0x0000DA05 lfm_value: '41053567-0741-4846-9d4a-d9ae1cd3807f/a18b53ce-2f40-47a4-adee-4d1e7fe85206' lfs setstripe: cannot resolve path '/mnt/lustre/d27O.sanity/f27O.sanity': No such file or directory (2) lfs setstripe: '/mnt/lustre/d27O.sanity/f27O.sanity' is not on a Lustre filesystem: No such file or directory (2) cat: /mnt/lustre/d27O.sanity/f27O.sanity: No such file or directory /home/green/git/lustre-release/lustre/tests/sanity.sh: line 3271: /mnt/lustre/d27O.sanity/f27O.sanity: No such file or directory rm: cannot 
remove '/mnt/lustre/d27O.sanity/f27O.sanity.new': Operation not permitted llite.lustre-ffff8800b6384000.foreign_symlink_prefix=/tmp/ /mnt/lustre/d27O.sanity/f27O.sanity.new has type link OK /mnt/lustre/d27O.sanity/f27O.sanity.new links to /tmp/41053567-0741-4846-9d4a-d9ae1cd3807f/a18b53ce-2f40-47a4-adee-4d1e7fe85206 OK FOOFOO lfm_value: '41053567-0741-4846-9d4a-d9ae1cd3807f/a18b53ce-2f40-47a4-adee-4d1e7fe85206' rm: cannot remove '/mnt/lustre/d27O.sanity/f27O.sanity': Operation not permitted llite.lustre-ffff8800b6384000.foreign_symlink_enable=0 lfs unlink_foreign: unable to open '/mnt/lustre/d27O.sanity/*': No such file or directory (2) error: unlink_foreign: unlink foreign entry '/mnt/lustre/d27O.sanity/*' failed PASS 27O (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27P: basic ops on foreign dir of foreign_symlink type ========================================================== 20:42:39 (1713487359) striped dir -i1 -c2 -H all_char /mnt/lustre/d27P.sanity llite.lustre-ffff8800b6384000.foreign_symlink_enable=1 lfm_magic: 0x0CD50CD0 lfm_length: 73 lfm_type: 0x0000DA05 (symlink) lfm_flags: 0x0000DA05 lfm_value: 'a863da1f-49bc-438a-a5c8-cb2c643f5f7a/1149ce8a-84ae-4e6c-8596-09c22b51c394' lfm_magic: 0x0CD50CD0 lfm_type: 0x0000DA05 (symlink) lfm_flags: 0x0000DA05 lfm_value: 'a863da1f-49bc-438a-a5c8-cb2c643f5f7a/1149ce8a-84ae-4e6c-8596-09c22b51c394' touch: cannot touch '/mnt/lustre/d27P.sanity/d27P.sanity/f27P.sanity': No such file or directory rmdir: failed to remove '/mnt/lustre/d27P.sanity/d27P.sanity.new': Not a directory llite.lustre-ffff8800b6384000.foreign_symlink_prefix=/tmp/ /mnt/lustre/d27P.sanity/d27P.sanity.new has type link OK /mnt/lustre/d27P.sanity/d27P.sanity.new links to /tmp/a863da1f-49bc-438a-a5c8-cb2c643f5f7a/1149ce8a-84ae-4e6c-8596-09c22b51c394 OK FOOFOO lfm_value: 'a863da1f-49bc-438a-a5c8-cb2c643f5f7a/1149ce8a-84ae-4e6c-8596-09c22b51c394' rmdir: failed to remove 
'/mnt/lustre/d27P.sanity/d27P.sanity': Not a directory llite.lustre-ffff8800b6384000.foreign_symlink_enable=0 lfs unlink_foreign: unable to open '/mnt/lustre/d27P.sanity/*': No such file or directory (2) error: unlink_foreign: unlink foreign entry '/mnt/lustre/d27P.sanity/*' failed PASS 27P (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27Q: llapi_file_get_stripe() works on symlinks ========================================================== 20:42:42 (1713487362) striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d27Q.sanity-1 striped dir -i1 -c2 -H all_char /mnt/lustre/d27Q.sanity-2 lmm_magic: v1 stripe_count: 1 stripe_size: 4194304 lmm_magic: v1 stripe_count: 1 stripe_size: 4194304 lmm_magic: v1 stripe_count: 1 stripe_size: 4194304 PASS 27Q (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27R: test max_stripecount limitation when stripe count is set to -1 ========================================================== 20:42:45 (1713487365) striped dir -i1 -c2 -H crush2 /mnt/lustre/d27R.sanity oleg329-server: error: set_param: setting /sys/fs/lustre/lod/lustre-MDT0000-mdtlov/max_stripecount=-1: Numerical result out of range oleg329-server: error: set_param: setting /sys/fs/lustre/lod/lustre-MDT0001-mdtlov/max_stripecount=-1: Numerical result out of range oleg329-server: error: set_param: setting 'lod/*/max_stripecount'='-1': Numerical result out of range pdsh@oleg329-client: oleg329-server: ssh exited with exit code 34 lod.lustre-MDT0000-mdtlov.max_stripecount=1 lod.lustre-MDT0001-mdtlov.max_stripecount=1 lod.lustre-MDT0000-mdtlov.max_stripecount=0 lod.lustre-MDT0001-mdtlov.max_stripecount=0 PASS 27R (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27T: no eio on close on partial write due to enosp ========================================================== 20:42:49 (1713487369) fail_loc=0x20000411 fail_val=1 
fail_loc=0x80000215 PASS 27T (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27U: append pool and stripe count work with composite default layout ========================================================== 20:42:55 (1713487375) Creating new pool oleg329-server: Pool lustre.test_27U-append created Adding targets to pool oleg329-server: OST lustre-OST0000_UUID added to pool lustre.test_27U-append oleg329-server: OST lustre-OST0001_UUID added to pool lustre.test_27U-append Creating new pool oleg329-server: Pool lustre.test_27U-normal created Waiting 90s for '' Adding targets to pool oleg329-server: OST lustre-OST0000_UUID added to pool lustre.test_27U-normal oleg329-server: OST lustre-OST0001_UUID added to pool lustre.test_27U-normal striped dir -i1 -c2 -H all_char /mnt/lustre/d27U.sanity /mnt/lustre/d27U.sanity/f27U.sanity.1 lmm_stripe_count: 1 lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 1 lmm_pool: test_27U-normal obdidx objid objid group 1 138 0x8a 0x2c0000403 mdd.lustre-MDT0000.append_pool=test_27U-append mdd.lustre-MDT0001.append_pool=test_27U-append /mnt/lustre/d27U.sanity/f27U.sanity.2 lmm_stripe_count: 1 lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 1 lmm_pool: test_27U-append obdidx objid objid group 1 1073 0x431 0x2c0000402 mdd.lustre-MDT0000.append_stripe_count=2 mdd.lustre-MDT0001.append_stripe_count=2 /mnt/lustre/d27U.sanity/f27U.sanity.3 lmm_stripe_count: 2 lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 0 lmm_pool: test_27U-append obdidx objid objid group 0 117 0x75 0x280000bd1 1 139 0x8b 0x2c0000403 Destroy the created pools: test_27U-append,test_27U-normal lustre.test_27U-append oleg329-server: OST lustre-OST0000_UUID removed from pool lustre.test_27U-append oleg329-server: OST lustre-OST0001_UUID removed from pool lustre.test_27U-append oleg329-server: Pool lustre.test_27U-append destroyed 
lustre.test_27U-normal oleg329-server: OST lustre-OST0000_UUID removed from pool lustre.test_27U-normal oleg329-server: OST lustre-OST0001_UUID removed from pool lustre.test_27U-normal oleg329-server: Pool lustre.test_27U-normal destroyed mdd.lustre-MDT0000.append_stripe_count=1 mdd.lustre-MDT0001.append_stripe_count=1 mdd.lustre-MDT0000.append_pool=none mdd.lustre-MDT0001.append_pool=none PASS 27U (26s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 27V: creating widely striped file races with deactivating OST ========================================================== 20:43:23 (1713487403) SKIP: sanity test_27V needs >= 4 OSTs SKIP 27V (0s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 28: create/mknod/mkdir with bad file types ====================================================================== 20:43:25 (1713487405) striped dir -i0 -c2 -H crush2 /mnt/lustre/d28 createtest: SUCCESS PASS 28 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 29: IT_GETATTR regression ====================================================================================== 20:43:28 (1713487408) striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d29 first d29 total 0 -rw-r--r-- 1 root root 0 Apr 18 20:43 foo second d29 total 0 -rw-r--r-- 1 root root 0 Apr 18 20:43 foo done PASS 29 (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 30a: execute binary from Lustre (execve) ======================================================================== 20:43:34 (1713487414) bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var PASS 30a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 30b: execute binary from Lustre as non-root ===================================================================== 
20:43:37 (1713487417) running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/ls] [/] bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var PASS 30b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 30c: execute binary from Lustre without read perms ============================================================== 20:43:40 (1713487420) running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/ls] [/] bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var PASS 30c (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 30d: execute binary from Lustre while clear locks ========================================================== 20:43:43 (1713487423) ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6384000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b6384000.lru_size=clear 128+0 records in 128+0 records out 134217728 bytes (134 MB) copied, 3.28932 s, 40.8 MB/s ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6384000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b6384000.lru_size=clear 128+0 records in 128+0 records out 134217728 bytes (134 MB) copied, 3.45078 s, 38.9 MB/s ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6384000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b6384000.lru_size=clear 128+0 records in 128+0 records out 134217728 bytes (134 MB) copied, 3.42146 s, 39.2 MB/s ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6384000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b6384000.lru_size=clear 128+0 records in 128+0 records out 134217728 bytes (134 MB) copied, 3.4589 s, 38.8 MB/s ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6384000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b6384000.lru_size=clear 128+0 records in 128+0 records out 134217728 bytes (134 MB) copied, 3.39945 s, 39.5 MB/s 
ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6384000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b6384000.lru_size=clear 128+0 records in 128+0 records out 134217728 bytes (134 MB) copied, 3.71583 s, 36.1 MB/s ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6384000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b6384000.lru_size=clear 128+0 records in 128+0 records out 134217728 bytes (134 MB) copied, 3.34584 s, 40.1 MB/s ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6384000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b6384000.lru_size=clear 128+0 records in 128+0 records out 134217728 bytes (134 MB) copied, 3.70363 s, 36.2 MB/s ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6384000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b6384000.lru_size=clear 128+0 records in 128+0 records out 134217728 bytes (134 MB) copied, 3.71898 s, 36.1 MB/s ldlm.namespaces.lustre-MDT0000-mdc-ffff8800b6384000.lru_size=clear ldlm.namespaces.lustre-MDT0001-mdc-ffff8800b6384000.lru_size=clear 128+0 records in 128+0 records out 134217728 bytes (134 MB) copied, 3.45707 s, 38.8 MB/s PASS 30d (44s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31a: open-unlink file ============================================================================================ 20:44:28 (1713487468) opening writing unlinking /mnt/lustre/f31 accessing (1) seeking (1) accessing (2) fstat... 
reading comparing data truncating seeking (2) writing again seeking (3) reading again comparing data again closing SUCCESS - goto beer /mnt/lustre/f31: absent OK PASS 31a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31b: unlink file with multiple links while open ================================================================= 20:44:31 (1713487471) /mnt/lustre/f31 has type file OK PASS 31b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31c: open-unlink file with multiple links ======================================================================= 20:44:34 (1713487474) multiop /mnt/lustre/f31 vO_uc TMPPIPE=/tmp/multiop_open_wait_pipe.7531 PASS 31c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31d: remove of open directory =================================================================================== 20:44:37 (1713487477) creating directory /mnt/lustre/d31d opening directory unlinking /mnt/lustre/d31d Ok, everything goes well. /mnt/lustre/d31d: absent OK PASS 31d (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31e: remove of open non-empty directory ========================================================================= 20:44:40 (1713487480) creating directory /mnt/lustre/d31e creating file /mnt/lustre/d31e/0 opening directory unlinking /mnt/lustre/d31e Ok, everything goes well. 
PASS 31e (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31f: remove of open directory with open-unlink file ============================================================= 20:44:43 (1713487483) + test_mkdir /mnt/lustre/d31f + local path + local p_option + local hash_type + hash_name=("all_char" "fnv_1a_64" "crush") + local hash_name + local dirstripe_count=2 + local dirstripe_index=1 + local OPTIND=1 + local overstripe_count + local stripe_command=-c ++ version_code 2.15.0 +++ tr '[:punct:][a-zA-Z]' ' ' ++ eval set -- 2 15 0 +++ set -- 2 15 0 ++ echo -n 34537472 + (( 34553369 > 34537472 )) + hash_name+=("crush2") + getopts c:C:H:i:p opt + shift 0 + '[' 1 -eq 1 ']' + path=/mnt/lustre/d31f ++ dirname /mnt/lustre/d31f + local parent=/mnt/lustre + '[' '' == -p ']' + [[ -n '' ]] + '[' 2 -le 1 ']' + is_lustre /mnt/lustre ++ stat -f -c %T /mnt/lustre + '[' lustre = lustre ']' + local mdt_index + '[' 1 -eq -1 ']' + mdt_index=1 + '[' -z '' ']' + hash_type=fnv_1a_64 ++ version_code 2.8.0 +++ tr '[:punct:][a-zA-Z]' ' ' ++ eval set -- 2 8 0 +++ set -- 2 8 0 ++ echo -n 34078720 + (( 34553369 >= 34078720 )) + '[' 2 -eq -1 ']' + echo 'striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d31f' striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d31f + /home/green/git/lustre-release/lustre/utils/lfs mkdir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d31f + /home/green/git/lustre-release/lustre/utils/lfs setstripe -S 1048576 -c 1 /mnt/lustre/d31f + cp /etc/hosts /mnt/lustre/d31f + ls -l /mnt/lustre/d31f total 1 -rw-r--r-- 1 root root 159 Apr 18 20:44 hosts + /home/green/git/lustre-release/lustre/utils/lfs getstripe /mnt/lustre/d31f/hosts /mnt/lustre/d31f/hosts lmm_stripe_count: 1 lmm_stripe_size: 1048576 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 0 obdidx objid objid group 0 1117 0x45d 0x280000bd0 + multiop_bg_pause /mnt/lustre/d31f D_c + MULTIOP_PROG=multiop + FILE=/mnt/lustre/d31f + ARGS=D_c + TMPPIPE=/tmp/multiop_open_wait_pipe.7531 + 
mkfifo /tmp/multiop_open_wait_pipe.7531 + echo 'multiop /mnt/lustre/d31f vD_c' multiop /mnt/lustre/d31f vD_c + local pid=22197 + echo TMPPIPE=/tmp/multiop_open_wait_pipe.7531 + multiop /mnt/lustre/d31f vD_c TMPPIPE=/tmp/multiop_open_wait_pipe.7531 + read -t 60 multiop_output + '[' 0 -ne 0 ']' + rm -f /tmp/multiop_open_wait_pipe.7531 + '[' PAUSING '!=' PAUSING ']' + return 0 + MULTIPID=22197 + rm -rv /mnt/lustre/d31f removed '/mnt/lustre/d31f/hosts' removed directory: '/mnt/lustre/d31f' + test_mkdir /mnt/lustre/d31f + local path + local p_option + local hash_type + hash_name=("all_char" "fnv_1a_64" "crush") + local hash_name + local dirstripe_count=2 + local dirstripe_index=1 + local OPTIND=1 + local overstripe_count + local stripe_command=-c ++ version_code 2.15.0 +++ tr '[:punct:][a-zA-Z]' ' ' ++ eval set -- 2 15 0 +++ set -- 2 15 0 ++ echo -n 34537472 + (( 34553369 > 34537472 )) + hash_name+=("crush2") + getopts c:C:H:i:p opt + shift 0 + '[' 1 -eq 1 ']' + path=/mnt/lustre/d31f ++ dirname /mnt/lustre/d31f + local parent=/mnt/lustre + '[' '' == -p ']' + [[ -n '' ]] + '[' 2 -le 1 ']' + is_lustre /mnt/lustre ++ stat -f -c %T /mnt/lustre + '[' lustre = lustre ']' + local mdt_index + '[' 1 -eq -1 ']' + mdt_index=1 + '[' -z '' ']' + hash_type=crush ++ version_code 2.8.0 +++ tr '[:punct:][a-zA-Z]' ' ' ++ eval set -- 2 8 0 +++ set -- 2 8 0 ++ echo -n 34078720 + (( 34553369 >= 34078720 )) + '[' 2 -eq -1 ']' + echo 'striped dir -i1 -c2 -H crush /mnt/lustre/d31f' striped dir -i1 -c2 -H crush /mnt/lustre/d31f + /home/green/git/lustre-release/lustre/utils/lfs mkdir -i1 -c2 -H crush /mnt/lustre/d31f + /home/green/git/lustre-release/lustre/utils/lfs setstripe -S 1048576 -c 1 /mnt/lustre/d31f + cp /etc/hosts /mnt/lustre/d31f + ls -l /mnt/lustre/d31f total 1 -rw-r--r-- 1 root root 159 Apr 18 20:44 hosts + /home/green/git/lustre-release/lustre/utils/lfs getstripe /mnt/lustre/d31f/hosts /mnt/lustre/d31f/hosts lmm_stripe_count: 1 lmm_stripe_size: 1048576 lmm_pattern: raid0 
lmm_layout_gen: 0 lmm_stripe_offset: 1 obdidx objid objid group 1 1075 0x433 0x2c0000402 + multiop_bg_pause /mnt/lustre/d31f D_c + MULTIOP_PROG=multiop + FILE=/mnt/lustre/d31f + ARGS=D_c + TMPPIPE=/tmp/multiop_open_wait_pipe.7531 + mkfifo /tmp/multiop_open_wait_pipe.7531 + echo 'multiop /mnt/lustre/d31f vD_c' multiop /mnt/lustre/d31f vD_c + local pid=22216 + multiop /mnt/lustre/d31f vD_c + echo TMPPIPE=/tmp/multiop_open_wait_pipe.7531 TMPPIPE=/tmp/multiop_open_wait_pipe.7531 + read -t 60 multiop_output + '[' 0 -ne 0 ']' + rm -f /tmp/multiop_open_wait_pipe.7531 + '[' PAUSING '!=' PAUSING ']' + return 0 + MULTIPID2=22216 + kill -USR1 22197 + wait 22197 + sleep 6 + kill -USR1 22216 + wait 22216 + set +vx PASS 31f (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31g: cross directory link================== 20:44:52 (1713487492) -- cross directory link -- striped dir -i1 -c1 -H all_char /mnt/lustre/d31g.sanityga striped dir -i1 -c1 -H crush2 /mnt/lustre/d31g.sanitygb /mnt/lustre/d31g.sanityga/f has type file OK /mnt/lustre/d31g.sanitygb/g has type file OK PASS 31g (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31h: cross directory link under child========================================================================= 20:44:56 (1713487496) -- cross directory link -- striped dir -i1 -c1 -H fnv_1a_64 /mnt/lustre/d31h.sanity striped dir -i1 -c1 -H crush2 /mnt/lustre/d31h.sanity/dir /mnt/lustre/d31h.sanity/f has type file OK /mnt/lustre/d31h.sanity/dir/g has type file OK PASS 31h (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31i: cross directory link under parent========================================================================= 20:44:59 (1713487499) -- cross directory link -- striped dir -i1 -c1 -H fnv_1a_64 /mnt/lustre/d31i.sanity striped dir -i1 -c1 -H crush2 /mnt/lustre/d31i.sanity/dir 
/mnt/lustre/d31i.sanity/dir/f has type file OK /mnt/lustre/d31i.sanity/g has type file OK PASS 31i (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31j: link for directory =================== 20:45:02 (1713487502) striped dir -i1 -c1 -H all_char /mnt/lustre/d31j.sanity striped dir -i1 -c1 -H crush /mnt/lustre/d31j.sanity/dir1 ln: '/mnt/lustre/d31j.sanity/dir1': hard link not allowed for directory link: cannot create link '/mnt/lustre/d31j.sanity/dir3' to '/mnt/lustre/d31j.sanity/dir1': Operation not permitted link: cannot create link '/mnt/lustre/d31j.sanity/dir1' to '/mnt/lustre/d31j.sanity/dir1': File exists PASS 31j (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31k: link to file: the same, non-existing, dir ========================================================== 20:45:05 (1713487505) striped dir -i1 -c1 -H fnv_1a_64 /mnt/lustre/d31k.sanity link: cannot create link '/mnt/lustre/d31k.sanity/exist' to '/mnt/lustre/d31k.sanity/s': File exists link: cannot create link '/mnt/lustre/d31k.sanity/s' to '/mnt/lustre/d31k.sanity/s': File exists link: cannot create link '/mnt/lustre/d31k.sanity' to '/mnt/lustre/d31k.sanity/s': File exists link: cannot create link '/mnt/lustre/d31k.sanity/s' to '/mnt/lustre/d31k.sanity': File exists link: cannot create link '/mnt/lustre/d31k.sanity/foo' to '/mnt/lustre/d31k.sanity/not-exist': No such file or directory link: cannot create link '/mnt/lustre/d31k.sanity/s' to '/mnt/lustre/d31k.sanity/not-exist': No such file or directory PASS 31k (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31l: link to file: target dir has trailing slash ========================================================== 20:45:08 (1713487508) PASS 31l (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31m: link to file: the same, non-existing, 
dir ========================================================== 20:45:12 (1713487512) link: cannot create link '/mnt/lustre/d31m2/exist' to '/mnt/lustre/d31m/s': File exists link: cannot create link '/mnt/lustre/d31m2' to '/mnt/lustre/d31m/s': File exists link: cannot create link '/mnt/lustre/d31m/s' to '/mnt/lustre/d31m2': File exists link: cannot create link '/mnt/lustre/d31m2/foo' to '/mnt/lustre/d31m/not-exist': No such file or directory link: cannot create link '/mnt/lustre/d31m2/s' to '/mnt/lustre/d31m/not-exist': No such file or directory PASS 31m (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31n: check link count of unlinked file ==== 20:45:15 (1713487515) PASS 31n (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31o: duplicate hard links with same filename ========================================================== 20:45:18 (1713487518) striped dir -i1 -c2 -H all_char /mnt/lustre/d31o.sanity 28999: link /mnt/lustre/d31o.sanity/f31o.sanity.1_QcqVpy to /mnt/lustre/d31o.sanity/f31o.sanity.1 succeeded 29036: link /mnt/lustre/d31o.sanity/f31o.sanity.2_xIiOGj to /mnt/lustre/d31o.sanity/f31o.sanity.2 succeeded 29072: link /mnt/lustre/d31o.sanity/f31o.sanity.3_lHFwD3 to /mnt/lustre/d31o.sanity/f31o.sanity.3 succeeded 29109: link /mnt/lustre/d31o.sanity/f31o.sanity.4_E5iePO to /mnt/lustre/d31o.sanity/f31o.sanity.4 succeeded 29146: link /mnt/lustre/d31o.sanity/f31o.sanity.5_FKevzS to /mnt/lustre/d31o.sanity/f31o.sanity.5 succeeded 29184: link /mnt/lustre/d31o.sanity/f31o.sanity.6_J60p7u to /mnt/lustre/d31o.sanity/f31o.sanity.6 succeeded 29222: link /mnt/lustre/d31o.sanity/f31o.sanity.7_iSL9qW to /mnt/lustre/d31o.sanity/f31o.sanity.7 succeeded 29258: link /mnt/lustre/d31o.sanity/f31o.sanity.8_noLS1k to /mnt/lustre/d31o.sanity/f31o.sanity.8 succeeded 29297: link /mnt/lustre/d31o.sanity/f31o.sanity.9_xZBBMN to /mnt/lustre/d31o.sanity/f31o.sanity.9 succeeded 
29333: link /mnt/lustre/d31o.sanity/f31o.sanity.10_K57grB to /mnt/lustre/d31o.sanity/f31o.sanity.10 succeeded 29369: link /mnt/lustre/d31o.sanity/f31o.sanity.11_hLn5hF to /mnt/lustre/d31o.sanity/f31o.sanity.11 succeeded 29407: link /mnt/lustre/d31o.sanity/f31o.sanity.12_fdEavv to /mnt/lustre/d31o.sanity/f31o.sanity.12 succeeded 29443: link /mnt/lustre/d31o.sanity/f31o.sanity.13_oPFv3m to /mnt/lustre/d31o.sanity/f31o.sanity.13 succeeded 29482: link /mnt/lustre/d31o.sanity/f31o.sanity.14_TMYNxs to /mnt/lustre/d31o.sanity/f31o.sanity.14 succeeded 29517: link /mnt/lustre/d31o.sanity/f31o.sanity.15_AMlzeE to /mnt/lustre/d31o.sanity/f31o.sanity.15 succeeded 29554: link /mnt/lustre/d31o.sanity/f31o.sanity.16_TcO5y3 to /mnt/lustre/d31o.sanity/f31o.sanity.16 succeeded 29592: link /mnt/lustre/d31o.sanity/f31o.sanity.17_TQqxX9 to /mnt/lustre/d31o.sanity/f31o.sanity.17 succeeded 29629: link /mnt/lustre/d31o.sanity/f31o.sanity.18_zViemC to /mnt/lustre/d31o.sanity/f31o.sanity.18 succeeded 29666: link /mnt/lustre/d31o.sanity/f31o.sanity.19_s7zGwn to /mnt/lustre/d31o.sanity/f31o.sanity.19 succeeded 29707: link /mnt/lustre/d31o.sanity/f31o.sanity.20_prPaYl to /mnt/lustre/d31o.sanity/f31o.sanity.20 succeeded 29740: link /mnt/lustre/d31o.sanity/f31o.sanity.21_5thC3J to /mnt/lustre/d31o.sanity/f31o.sanity.21 succeeded 29778: link /mnt/lustre/d31o.sanity/f31o.sanity.22_isWh5J to /mnt/lustre/d31o.sanity/f31o.sanity.22 succeeded 29815: link /mnt/lustre/d31o.sanity/f31o.sanity.23_bi42c4 to /mnt/lustre/d31o.sanity/f31o.sanity.23 succeeded 29851: link /mnt/lustre/d31o.sanity/f31o.sanity.24_osSpWP to /mnt/lustre/d31o.sanity/f31o.sanity.24 succeeded 29890: link /mnt/lustre/d31o.sanity/f31o.sanity.25_jVfpmr to /mnt/lustre/d31o.sanity/f31o.sanity.25 succeeded 29925: link /mnt/lustre/d31o.sanity/f31o.sanity.26_F7SdBI to /mnt/lustre/d31o.sanity/f31o.sanity.26 succeeded 29963: link /mnt/lustre/d31o.sanity/f31o.sanity.27_YKYXEu to /mnt/lustre/d31o.sanity/f31o.sanity.27 succeeded 30002: link 
/mnt/lustre/d31o.sanity/f31o.sanity.28_2ro5Cp to /mnt/lustre/d31o.sanity/f31o.sanity.28 succeeded 30037: link /mnt/lustre/d31o.sanity/f31o.sanity.29_1191mF to /mnt/lustre/d31o.sanity/f31o.sanity.29 succeeded 30078: link /mnt/lustre/d31o.sanity/f31o.sanity.30_JwTVzm to /mnt/lustre/d31o.sanity/f31o.sanity.30 succeeded 30113: link /mnt/lustre/d31o.sanity/f31o.sanity.31_ofGZG5 to /mnt/lustre/d31o.sanity/f31o.sanity.31 succeeded 30150: link /mnt/lustre/d31o.sanity/f31o.sanity.32_2CKuOY to /mnt/lustre/d31o.sanity/f31o.sanity.32 succeeded 30189: link /mnt/lustre/d31o.sanity/f31o.sanity.33_6mtXOL to /mnt/lustre/d31o.sanity/f31o.sanity.33 succeeded 30225: link /mnt/lustre/d31o.sanity/f31o.sanity.34_jFTHBo to /mnt/lustre/d31o.sanity/f31o.sanity.34 succeeded 30262: link /mnt/lustre/d31o.sanity/f31o.sanity.35_PqK6X7 to /mnt/lustre/d31o.sanity/f31o.sanity.35 succeeded 30300: link /mnt/lustre/d31o.sanity/f31o.sanity.36_DIjBvk to /mnt/lustre/d31o.sanity/f31o.sanity.36 succeeded 30339: link /mnt/lustre/d31o.sanity/f31o.sanity.37_6c7OER to /mnt/lustre/d31o.sanity/f31o.sanity.37 succeeded 30376: link /mnt/lustre/d31o.sanity/f31o.sanity.38_M4Te41 to /mnt/lustre/d31o.sanity/f31o.sanity.38 succeeded 30412: link /mnt/lustre/d31o.sanity/f31o.sanity.39_kgJ5Vd to /mnt/lustre/d31o.sanity/f31o.sanity.39 succeeded 30448: link /mnt/lustre/d31o.sanity/f31o.sanity.40_giT7tW to /mnt/lustre/d31o.sanity/f31o.sanity.40 succeeded 30485: link /mnt/lustre/d31o.sanity/f31o.sanity.41_evgH2S to /mnt/lustre/d31o.sanity/f31o.sanity.41 succeeded 30522: link /mnt/lustre/d31o.sanity/f31o.sanity.42_ihPy7Q to /mnt/lustre/d31o.sanity/f31o.sanity.42 succeeded 30559: link /mnt/lustre/d31o.sanity/f31o.sanity.43_kIlO1Z to /mnt/lustre/d31o.sanity/f31o.sanity.43 succeeded 30597: link /mnt/lustre/d31o.sanity/f31o.sanity.44_XAK7OU to /mnt/lustre/d31o.sanity/f31o.sanity.44 succeeded 30636: link /mnt/lustre/d31o.sanity/f31o.sanity.45_k9Apiw to /mnt/lustre/d31o.sanity/f31o.sanity.45 succeeded 30673: link 
/mnt/lustre/d31o.sanity/f31o.sanity.46_IskRTW to /mnt/lustre/d31o.sanity/f31o.sanity.46 succeeded 30710: link /mnt/lustre/d31o.sanity/f31o.sanity.47_jwAzmv to /mnt/lustre/d31o.sanity/f31o.sanity.47 succeeded 30746: link /mnt/lustre/d31o.sanity/f31o.sanity.48_Cixv80 to /mnt/lustre/d31o.sanity/f31o.sanity.48 succeeded 30782: link /mnt/lustre/d31o.sanity/f31o.sanity.49_hzxCoK to /mnt/lustre/d31o.sanity/f31o.sanity.49 succeeded 30819: link /mnt/lustre/d31o.sanity/f31o.sanity.50_CV6RG6 to /mnt/lustre/d31o.sanity/f31o.sanity.50 succeeded 30856: link /mnt/lustre/d31o.sanity/f31o.sanity.51_5eAVGQ to /mnt/lustre/d31o.sanity/f31o.sanity.51 succeeded 30895: link /mnt/lustre/d31o.sanity/f31o.sanity.52_dTRX6k to /mnt/lustre/d31o.sanity/f31o.sanity.52 succeeded 30931: link /mnt/lustre/d31o.sanity/f31o.sanity.53_7grkQq to /mnt/lustre/d31o.sanity/f31o.sanity.53 succeeded 30969: link /mnt/lustre/d31o.sanity/f31o.sanity.54_a95rPB to /mnt/lustre/d31o.sanity/f31o.sanity.54 succeeded 31005: link /mnt/lustre/d31o.sanity/f31o.sanity.55_u2MvO9 to /mnt/lustre/d31o.sanity/f31o.sanity.55 succeeded 31043: link /mnt/lustre/d31o.sanity/f31o.sanity.56_61fUoj to /mnt/lustre/d31o.sanity/f31o.sanity.56 succeeded 31081: link /mnt/lustre/d31o.sanity/f31o.sanity.57_LqIeoe to /mnt/lustre/d31o.sanity/f31o.sanity.57 succeeded 31117: link /mnt/lustre/d31o.sanity/f31o.sanity.58_mv1kU0 to /mnt/lustre/d31o.sanity/f31o.sanity.58 succeeded 31153: link /mnt/lustre/d31o.sanity/f31o.sanity.59_AfYVxU to /mnt/lustre/d31o.sanity/f31o.sanity.59 succeeded 31190: link /mnt/lustre/d31o.sanity/f31o.sanity.60_us0YYJ to /mnt/lustre/d31o.sanity/f31o.sanity.60 succeeded 31229: link /mnt/lustre/d31o.sanity/f31o.sanity.61_tm55so to /mnt/lustre/d31o.sanity/f31o.sanity.61 succeeded 31264: link /mnt/lustre/d31o.sanity/f31o.sanity.62_uOOpDj to /mnt/lustre/d31o.sanity/f31o.sanity.62 succeeded 31302: link /mnt/lustre/d31o.sanity/f31o.sanity.63_FeyWJ6 to /mnt/lustre/d31o.sanity/f31o.sanity.63 succeeded 31339: link 
/mnt/lustre/d31o.sanity/f31o.sanity.64_QIV93J to /mnt/lustre/d31o.sanity/f31o.sanity.64 succeeded 31376: link /mnt/lustre/d31o.sanity/f31o.sanity.65_rZnqMD to /mnt/lustre/d31o.sanity/f31o.sanity.65 succeeded 31413: link /mnt/lustre/d31o.sanity/f31o.sanity.66_IGeZVQ to /mnt/lustre/d31o.sanity/f31o.sanity.66 succeeded 31450: link /mnt/lustre/d31o.sanity/f31o.sanity.67_QwNjNp to /mnt/lustre/d31o.sanity/f31o.sanity.67 succeeded 31487: link /mnt/lustre/d31o.sanity/f31o.sanity.68_QKVYms to /mnt/lustre/d31o.sanity/f31o.sanity.68 succeeded 31524: link /mnt/lustre/d31o.sanity/f31o.sanity.69_OIqzMd to /mnt/lustre/d31o.sanity/f31o.sanity.69 succeeded 31562: link /mnt/lustre/d31o.sanity/f31o.sanity.70_GaoCco to /mnt/lustre/d31o.sanity/f31o.sanity.70 succeeded 31599: link /mnt/lustre/d31o.sanity/f31o.sanity.71_t1A4db to /mnt/lustre/d31o.sanity/f31o.sanity.71 succeeded 31636: link /mnt/lustre/d31o.sanity/f31o.sanity.72_AGhqkv to /mnt/lustre/d31o.sanity/f31o.sanity.72 succeeded 31673: link /mnt/lustre/d31o.sanity/f31o.sanity.73_2UyxTS to /mnt/lustre/d31o.sanity/f31o.sanity.73 succeeded 31710: link /mnt/lustre/d31o.sanity/f31o.sanity.74_YpM10R to /mnt/lustre/d31o.sanity/f31o.sanity.74 succeeded 31748: link /mnt/lustre/d31o.sanity/f31o.sanity.75_tpLbDJ to /mnt/lustre/d31o.sanity/f31o.sanity.75 succeeded 31784: link /mnt/lustre/d31o.sanity/f31o.sanity.76_5ZULyx to /mnt/lustre/d31o.sanity/f31o.sanity.76 succeeded 31821: link /mnt/lustre/d31o.sanity/f31o.sanity.77_GRrJKZ to /mnt/lustre/d31o.sanity/f31o.sanity.77 succeeded 31858: link /mnt/lustre/d31o.sanity/f31o.sanity.78_MLpJsF to /mnt/lustre/d31o.sanity/f31o.sanity.78 succeeded 31895: link /mnt/lustre/d31o.sanity/f31o.sanity.79_r2ZAoK to /mnt/lustre/d31o.sanity/f31o.sanity.79 succeeded 31932: link /mnt/lustre/d31o.sanity/f31o.sanity.80_d0dZjl to /mnt/lustre/d31o.sanity/f31o.sanity.80 succeeded 31970: link /mnt/lustre/d31o.sanity/f31o.sanity.81_DMjYJU to /mnt/lustre/d31o.sanity/f31o.sanity.81 succeeded 32007: link 
/mnt/lustre/d31o.sanity/f31o.sanity.82_iopCOX to /mnt/lustre/d31o.sanity/f31o.sanity.82 succeeded 32045: link /mnt/lustre/d31o.sanity/f31o.sanity.83_3exrwA to /mnt/lustre/d31o.sanity/f31o.sanity.83 succeeded 32083: link /mnt/lustre/d31o.sanity/f31o.sanity.84_uuMakR to /mnt/lustre/d31o.sanity/f31o.sanity.84 succeeded 32118: link /mnt/lustre/d31o.sanity/f31o.sanity.85_98l2Qk to /mnt/lustre/d31o.sanity/f31o.sanity.85 succeeded 32155: link /mnt/lustre/d31o.sanity/f31o.sanity.86_XpCH1c to /mnt/lustre/d31o.sanity/f31o.sanity.86 succeeded 32192: link /mnt/lustre/d31o.sanity/f31o.sanity.87_rWUV72 to /mnt/lustre/d31o.sanity/f31o.sanity.87 succeeded 32229: link /mnt/lustre/d31o.sanity/f31o.sanity.88_qlhRU7 to /mnt/lustre/d31o.sanity/f31o.sanity.88 succeeded 32268: link /mnt/lustre/d31o.sanity/f31o.sanity.89_pWGsbG to /mnt/lustre/d31o.sanity/f31o.sanity.89 succeeded 32304: link /mnt/lustre/d31o.sanity/f31o.sanity.90_A2yfXR to /mnt/lustre/d31o.sanity/f31o.sanity.90 succeeded 32341: link /mnt/lustre/d31o.sanity/f31o.sanity.91_wuU0rc to /mnt/lustre/d31o.sanity/f31o.sanity.91 succeeded 32379: link /mnt/lustre/d31o.sanity/f31o.sanity.92_zePViE to /mnt/lustre/d31o.sanity/f31o.sanity.92 succeeded 32415: link /mnt/lustre/d31o.sanity/f31o.sanity.93_Q0cBj6 to /mnt/lustre/d31o.sanity/f31o.sanity.93 succeeded 32454: link /mnt/lustre/d31o.sanity/f31o.sanity.94_m08Nrb to /mnt/lustre/d31o.sanity/f31o.sanity.94 succeeded 32489: link /mnt/lustre/d31o.sanity/f31o.sanity.95_Ju8Q5W to /mnt/lustre/d31o.sanity/f31o.sanity.95 succeeded 32526: link /mnt/lustre/d31o.sanity/f31o.sanity.96_7x3WJQ to /mnt/lustre/d31o.sanity/f31o.sanity.96 succeeded 32565: link /mnt/lustre/d31o.sanity/f31o.sanity.97_wQmAvO to /mnt/lustre/d31o.sanity/f31o.sanity.97 succeeded 32603: link /mnt/lustre/d31o.sanity/f31o.sanity.98_wausOr to /mnt/lustre/d31o.sanity/f31o.sanity.98 succeeded 32638: link /mnt/lustre/d31o.sanity/f31o.sanity.99_a7t2zI to /mnt/lustre/d31o.sanity/f31o.sanity.99 succeeded 32676: link 
/mnt/lustre/d31o.sanity/f31o.sanity.100_L6uprL to /mnt/lustre/d31o.sanity/f31o.sanity.100 succeeded PASS 31o (15s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31p: remove of open striped directory ===== 20:45:35 (1713487535) striped dir -i1 -c2 -H crush2 /mnt/lustre/d31p.sanity creating directory /mnt/lustre/d31p.sanity/striped_dir/test1 opening directory unlinking /mnt/lustre/d31p.sanity/striped_dir/test1 Ok, everything goes well. creating directory /mnt/lustre/d31p.sanity/striped_dir/test2 opening directory unlinking /mnt/lustre/d31p.sanity/striped_dir/test2 Ok, everything goes well. /mnt/lustre/d31p.sanity/striped_dir/test1: absent OK /mnt/lustre/d31p.sanity/striped_dir/test2: absent OK PASS 31p (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31q: create striped directory on specific MDTs ========================================================== 20:45:38 (1713487538) SKIP: sanity test_31q needs >= 3 MDTs SKIP 31q (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 31r: open-rename(replace) race ============ 20:45:41 (1713487541) fail_loc=0x1419 fail_val=3 PASS 31r (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 32a: stat d32a/ext2-mountpoint/.. =============================================================================== 20:45:45 (1713487545) == more mountpoints and symlinks ================= striped dir -i0 -c2 -H crush /mnt/lustre/d32a.sanity/ext2-mountpoint /mnt/lustre/d32a.sanity/ext2-mountpoint/.. has type dir OK losetup: /dev/loop0: detach failed: No such device or address PASS 32a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 32b: open d32b/ext2-mountpoint/.. 
=============================================================================== 20:45:48 (1713487548) striped dir -i0 -c2 -H crush2 /mnt/lustre/d32b.sanity/ext2-mountpoint total 17 drwxr-xr-x 3 root root 4096 Apr 18 20:45 . drwxr-xr-x 132 root root 12288 Apr 18 20:45 .. drwxr-xr-x 3 root root 1024 Apr 18 20:16 ext2-mountpoint losetup: /dev/loop0: detach failed: No such device or address PASS 32b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 32c: stat d32c/ext2-mountpoint/../d2/test_dir =================================================================== 20:45:51 (1713487551) striped dir -i0 -c2 -H crush2 /mnt/lustre/d32c.sanity/ext2-mountpoint striped dir -i0 -c2 -H crush /mnt/lustre/d32c.sanity/d2/test_dir /mnt/lustre/d32c.sanity/ext2-mountpoint/../d2/test_dir has type dir OK losetup: /dev/loop0: detach failed: No such device or address PASS 32c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 32d: open d32d/ext2-mountpoint/../d2/test_dir ========================================================== 20:45:55 (1713487555) striped dir -i0 -c2 -H crush2 /mnt/lustre/d32d.sanity/ext2-mountpoint striped dir -i0 -c2 -H crush2 /mnt/lustre/d32d.sanity/d2/test_dir total 12 drwxr-xr-x 2 root root 8192 Apr 18 20:45 . drwxr-xr-x 3 root root 4096 Apr 18 20:45 .. 
losetup: /dev/loop0: detach failed: No such device or address PASS 32d (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 32e: stat d32e/symlink->tmp/symlink->lustre-subdir ========================================================== 20:45:59 (1713487559) striped dir -i0 -c2 -H crush /mnt/lustre/d32e.sanity/tmp /mnt/lustre/d32e.sanity/tmp/symlink11 has type link OK /mnt/lustre/d32e.sanity/symlink01 has type link OK PASS 32e (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 32f: open d32f/symlink->tmp/symlink->lustre-subdir ========================================================== 20:46:02 (1713487562) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d32f.sanity/tmp symlink01 tmp symlink01 tmp PASS 32f (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 32g: stat d32g/symlink->tmp/symlink->lustre-subdir/2 ========================================================== 20:46:05 (1713487565) striped dir -i0 -c2 -H crush2 /mnt/lustre/d32g.sanity/tmp striped dir -i0 -c2 -H crush2 /mnt/lustre/d32g.sanity2 /mnt/lustre/d32g.sanity/tmp/symlink12 has type link OK /mnt/lustre/d32g.sanity/symlink02 has type link OK /mnt/lustre/d32g.sanity/tmp/symlink12 has type dir OK /mnt/lustre/d32g.sanity/symlink02 has type dir OK PASS 32g (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 32h: open d32h/symlink->tmp/symlink->lustre-subdir/2 ========================================================== 20:46:08 (1713487568) striped dir -i0 -c2 -H crush /mnt/lustre/d32h.sanity/tmp striped dir -i0 -c2 -H all_char /mnt/lustre/d32h.sanity2 PASS 32h (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 32i: stat d32i/ext2-mountpoint/../test_file ===================================================================== 20:46:11 (1713487571) striped dir -i0 
-c2 -H all_char /mnt/lustre/d32i.sanity/ext2-mountpoint /mnt/lustre/d32i.sanity/ext2-mountpoint/../test_file has type file OK losetup: /dev/loop0: detach failed: No such device or address PASS 32i (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 32j: open d32j/ext2-mountpoint/../test_file ===================================================================== 20:46:15 (1713487575) striped dir -i0 -c2 -H crush2 /mnt/lustre/d32j.sanity/ext2-mountpoint losetup: /dev/loop0: detach failed: No such device or address PASS 32j (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 32k: stat d32k/ext2-mountpoint/../d2/test_file ================================================================== 20:46:18 (1713487578) striped dir -i0 -c2 -H crush /mnt/lustre/d32k.sanity/ext2-mountpoint striped dir -i0 -c2 -H crush /mnt/lustre/d32k.sanity/d2 /mnt/lustre/d32k.sanity/ext2-mountpoint/../d2/test_file has type file OK losetup: /dev/loop0: detach failed: No such device or address PASS 32k (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 32l: open d32l/ext2-mountpoint/../d2/test_file ================================================================== 20:46:22 (1713487582) striped dir -i0 -c2 -H all_char /mnt/lustre/d32l.sanity/ext2-mountpoint striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d32l.sanity/d2 losetup: /dev/loop0: detach failed: No such device or address PASS 32l (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 32m: stat d32m/symlink->tmp/symlink->lustre-root ================================================================ 20:46:25 (1713487585) striped dir -i0 -c2 -H crush2 /mnt/lustre/d32m/tmp /mnt/lustre/d32m/tmp/symlink11 has type link OK /mnt/lustre/d32m/symlink01 has type link OK PASS 32m (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y 
debug_raw_pointers=Y == sanity test 32n: open d32n/symlink->tmp/symlink->lustre-root ================================================================ 20:46:28 (1713487588) striped dir -i0 -c2 -H all_char /mnt/lustre/d32n/tmp lrwxrwxrwx 1 root root 11 Apr 18 20:46 /mnt/lustre/d32n/tmp/symlink11 -> /mnt/lustre lrwxrwxrwx 1 root root 30 Apr 18 20:46 /mnt/lustre/d32n/symlink01 -> /mnt/lustre/d32n/tmp/symlink11 PASS 32n (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 32o: stat d32o/symlink->tmp/symlink->lustre-root/ ========================================================== 20:46:31 (1713487591) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d32o/tmp /mnt/lustre/d32o/tmp/symlink12 has type link OK /mnt/lustre/d32o/symlink02 has type link OK /mnt/lustre/d32o/tmp/symlink12 has type file OK /mnt/lustre/d32o/symlink02 has type file OK PASS 32o (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 32p: open d32p/symlink->tmp/symlink->lustre-root/ ========================================================== 20:46:34 (1713487594) 32p_1 32p_2 32p_3 32p_4 striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d32p/tmp 32p_5 32p_6 32p_7 32p_8 32p_9 32p_10 PASS 32p (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 32q: stat follows mountpoints in Lustre (should return error) ========================================================== 20:46:41 (1713487601) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d32q.sanity/ext2-mountpoint losetup: /dev/loop0: detach failed: No such device or address PASS 32q (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 32r: opendir follows mountpoints in Lustre (should return error) ========================================================== 20:46:45 (1713487605) striped dir -i0 -c2 -H all_char /mnt/lustre/d32r.sanity/ext2-mountpoint losetup: /dev/loop0: 
detach failed: No such device or address PASS 32r (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 33aa: write file with mode 444 (should return error) ========================================================== 20:46:48 (1713487608) 33_1 running as uid/gid/euid/egid 500/500/500/500, groups: [openfile] [-f] [O_RDWR] [/mnt/lustre/f33aa.sanity] Error in opening file "/mnt/lustre/f33aa.sanity"(flags=O_RDWR) 13: Permission denied 33_2 PASS 33aa (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 33a: test open file(mode=0444) with O_RDWR (should return error) ========================================================== 20:46:52 (1713487612) striped dir -i1 -c2 -H crush /mnt/lustre/d33a.sanity running as uid/gid/euid/egid 500/500/500/500, groups: [openfile] [-f] [O_RDWR:O_CREAT] [-m] [0444] [/mnt/lustre/d33a.sanity/f33a.sanity] Succeed in opening file "/mnt/lustre/d33a.sanity/f33a.sanity"(flags=O_RDWR, mode=444) running as uid/gid/euid/egid 500/500/500/500, groups: [openfile] [-f] [O_RDWR:O_CREAT] [-m] [0444] [/mnt/lustre/d33a.sanity/f33a.sanity] Error in opening file "/mnt/lustre/d33a.sanity/f33a.sanity"(flags=O_RDWR, mode=444) 13: Permission denied PASS 33a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 33b: test open file with malformed flags (No panic) ========================================================== 20:46:55 (1713487615) striped dir -i1 -c2 -H crush2 /mnt/lustre/d33b.sanity running as uid/gid/euid/egid 500/500/500/500, groups: [openfile] [-f] [1286739555] [/mnt/lustre/d33b.sanity/f33b.sanity] Error in opening file "/mnt/lustre/d33b.sanity/f33b.sanity"(flags=1286739555) 2: No such file or directory PASS 33b (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 33c: test write_bytes stats =============== 20:46:58 (1713487618) striped dir -i1 -c2 -H 
all_char /mnt/lustre/d33c.sanity baseline_write_bytes@ost1/lustre-OST0000=675441533 PASS 33c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 33d: openfile with 444 modes and malformed flags under remote dir ========================================================== 20:47:01 (1713487621) striped dir -i1 -c2 -H crush /mnt/lustre/d33d.sanity running as uid/gid/euid/egid 500/500/500/500, groups: [openfile] [-f] [O_RDWR] [/mnt/lustre/f33d.sanity] Error in opening file "/mnt/lustre/f33d.sanity"(flags=O_RDWR) 2: No such file or directory running as uid/gid/euid/egid 500/500/500/500, groups: [openfile] [-f] [O_RDWR:O_CREAT] [-m] [0444] [/mnt/lustre/d33d.sanity/remote_dir/f33] Succeed in opening file "/mnt/lustre/d33d.sanity/remote_dir/f33"(flags=O_RDWR, mode=444) running as uid/gid/euid/egid 500/500/500/500, groups: [openfile] [-f] [O_RDWR:O_CREAT] [-m] [0444] [/mnt/lustre/d33d.sanity/remote_dir/f33] Error in opening file "/mnt/lustre/d33d.sanity/remote_dir/f33"(flags=O_RDWR, mode=444) 13: Permission denied running as uid/gid/euid/egid 500/500/500/500, groups: [openfile] [-f] [1286739555] [/mnt/lustre/d33d.sanity/remote_dir/f33] Succeed in opening file "/mnt/lustre/d33d.sanity/remote_dir/f33"(flags=1286739555) PASS 33d (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 33e: mkdir and striped directory should have same mode ========================================================== 20:47:05 (1713487625) PASS 33e (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 33f: nonroot user can create, access, and remove a striped directory ========================================================== 20:47:08 (1713487628) mdt.lustre-MDT0000.enable_remote_dir_gid=-1 mdt.lustre-MDT0001.enable_remote_dir_gid=-1 running as uid/gid/euid/egid 500/500/500/500, groups: [lfs] [mkdir] [-i] [0] [-c2] [/mnt/lustre/d33f.sanity/striped_dir] 
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d33f.sanity/striped_dir/0] [/mnt/lustre/d33f.sanity/striped_dir/1] [/mnt/lustre/d33f.sanity/striped_dir/2] [/mnt/lustre/d33f.sanity/striped_dir/3] [/mnt/lustre/d33f.sanity/striped_dir/4] [/mnt/lustre/d33f.sanity/striped_dir/5] [/mnt/lustre/d33f.sanity/striped_dir/6] [/mnt/lustre/d33f.sanity/striped_dir/7] [/mnt/lustre/d33f.sanity/striped_dir/8] [/mnt/lustre/d33f.sanity/striped_dir/9] [/mnt/lustre/d33f.sanity/striped_dir/10] [/mnt/lustre/d33f.sanity/striped_dir/11] [/mnt/lustre/d33f.sanity/striped_dir/12] [/mnt/lustre/d33f.sanity/striped_dir/13] [/mnt/lustre/d33f.sanity/striped_dir/14] [/mnt/lustre/d33f.sanity/striped_dir/15] [/mnt/lustre/d33f.sanity/striped_dir/16] running as uid/gid/euid/egid 500/500/500/500, groups: [rm] [/mnt/lustre/d33f.sanity/striped_dir/0] [/mnt/lustre/d33f.sanity/striped_dir/1] [/mnt/lustre/d33f.sanity/striped_dir/2] [/mnt/lustre/d33f.sanity/striped_dir/3] [/mnt/lustre/d33f.sanity/striped_dir/4] [/mnt/lustre/d33f.sanity/striped_dir/5] [/mnt/lustre/d33f.sanity/striped_dir/6] [/mnt/lustre/d33f.sanity/striped_dir/7] [/mnt/lustre/d33f.sanity/striped_dir/8] [/mnt/lustre/d33f.sanity/striped_dir/9] [/mnt/lustre/d33f.sanity/striped_dir/10] [/mnt/lustre/d33f.sanity/striped_dir/11] [/mnt/lustre/d33f.sanity/striped_dir/12] [/mnt/lustre/d33f.sanity/striped_dir/13] [/mnt/lustre/d33f.sanity/striped_dir/14] [/mnt/lustre/d33f.sanity/striped_dir/15] [/mnt/lustre/d33f.sanity/striped_dir/16] running as uid/gid/euid/egid 500/500/500/500, groups: [rmdir] [/mnt/lustre/d33f.sanity/striped_dir] mdt.lustre-MDT0000.enable_remote_dir_gid=0 mdt.lustre-MDT0001.enable_remote_dir_gid=0 PASS 33f (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 33g: nonroot user create already existing root created file ========================================================== 20:47:12 (1713487632) running as uid/gid/euid/egid 500/500/500/500, groups: [mkdir] 
[/mnt/lustre/d33g.sanity/dir2] mkdir: cannot create directory '/mnt/lustre/d33g.sanity/dir2': File exists PASS 33g (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 33h: temp file is located on the same MDT as target (crush) ========================================================== 20:47:15 (1713487635) striped dir -i1 -c2 -H crush /mnt/lustre/d33h.sanity pattern .f33h.sanity.XXXXXX pattern f33h.sanity.XXXXXXXX 0/250 MDT index mismatches, expect ~2-4 pattern .f33h.sanity.XXXXXX pattern f33h.sanity.XXXXXXXX 250/250 matches, expect ~250 for crush pattern=.f33h.sanity....XXX pattern=f33h.sanity....XXXXX 250/250 matches, expect ~250 for crush PASS 33h (30s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 33hh: temp file is located on the same MDT as target (crush2) ========================================================== 20:47:47 (1713487667) MDS1_VERSION=34553369 version_code=34537472 striped dir -i1 -c2 -H crush2 /mnt/lustre/d33hh.sanity pattern .f33hh.sanity.XXXXXX /mnt/lustre/d33hh.sanity/.f33hh.sanity.DERIXJ MDT index mismatch 0 != 1 /mnt/lustre/d33hh.sanity/.f33hh.sanity.YWOSAJ MDT index mismatch 0 != 1 pattern f33hh.sanity.XXXXXXXX 2/250 MDT index mismatches, expect ~2-4 pattern .f33hh.sanity.XXXXXX pattern f33hh.sanity.XXXXXXXX 120/250 matches, expect ~125 for crush2 pattern=.f33hh.sanity....XXX pattern=f33hh.sanity....XXXXX 121/250 matches, expect ~125 for crush2 PASS 33hh (31s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 33i: striped directory can be accessed when one MDT is down ========================================================== 20:48:20 (1713487700) striped dir -i0 -c2 -H crush2 /mnt/lustre/d33i.sanity total: 1000 open/close in 1.93 seconds: 519.19 ops/second ls: closing directory /mnt/lustre/d33i.sanity: Cannot send after transport endpoint shutdown ls: closing directory 
/mnt/lustre/d33i.sanity: Cannot send after transport endpoint shutdown
PASS 33i (9s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 33j: lfs setdirstripe -D -i x,y,x should fail ========================================================== 20:48:31 (1713487711)
lfs setdirstripe: trying to create unrecommended default striped directory layout, '-D -i x,y,z' will stripe every new directory across all MDTs, add -c with the number of MDTs to do this anyway
Create striped directory on specified MDT, same as mkdir.
May be restricted to root or group users, depending on settings.
usage: setdirstripe [OPTION] [--mdt-count|-c stripe_count>
		[--help|-h] [--mdt-hash|-H mdt_hash]
		[--mdt-index|-i mdt_index[,mdt_index,...]
		[--mdt-overcount|-C stripe_count>
		[--default|-D] [--mode|-o mode]
		[--max-inherit|-X max_inherit]
		[--max-inherit-rr max_inherit_rr]
To create dir with a foreign (free format) layout :
setdirstripe|mkdir --foreign[=FOREIGN_TYPE] -x|-xattr STRING [--mode|-o MODE] [--flags HEX] DIRECTORY
error: setdirstripe: stripe count 1 doesn't match the number of MDTs: 2
Create striped directory on specified MDT, same as mkdir.
May be restricted to root or group users, depending on settings.
usage: setdirstripe [OPTION] [--mdt-count|-c stripe_count>
		[--help|-h] [--mdt-hash|-H mdt_hash]
		[--mdt-index|-i mdt_index[,mdt_index,...]
		[--mdt-overcount|-C stripe_count>
		[--default|-D] [--mode|-o mode]
		[--max-inherit|-X max_inherit]
		[--max-inherit-rr max_inherit_rr]
To create dir with a foreign (free format) layout :
setdirstripe|mkdir --foreign[=FOREIGN_TYPE] -x|-xattr STRING [--mode|-o MODE] [--flags HEX] DIRECTORY
error: setdirstripe: stripe count 3 doesn't match the number of MDTs: 2
Create striped directory on specified MDT, same as mkdir.
May be restricted to root or group users, depending on settings.
usage: setdirstripe [OPTION] [--mdt-count|-c stripe_count>
		[--help|-h] [--mdt-hash|-H mdt_hash]
		[--mdt-index|-i mdt_index[,mdt_index,...]
		[--mdt-overcount|-C stripe_count>
		[--default|-D] [--mode|-o mode]
		[--max-inherit|-X max_inherit]
		[--max-inherit-rr max_inherit_rr]
To create dir with a foreign (free format) layout :
setdirstripe|mkdir --foreign[=FOREIGN_TYPE] -x|-xattr STRING [--mode|-o MODE] [--flags HEX] DIRECTORY
PASS 33j (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 34a: truncate file that has not been opened ===================================================================== 20:48:34 (1713487714)
/mnt/lustre/f34 has size 2000000000000 OK
PASS 34a (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 34b: O_RDONLY opening file doesn't create objects =============================================================== 20:48:37 (1713487717)
/mnt/lustre/f34 has size 2000000000000 OK
Succeed in opening file "/mnt/lustre/f34"(flags=O_RDONLY)
/mnt/lustre/f34 has size 2000000000000 OK
PASS 34b (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 34c: O_RDWR opening file-with-size works ======================================================================== 20:48:40 (1713487720)
/mnt/lustre/f34 has size 2000000000000 OK
Succeed in opening file "/mnt/lustre/f34"(flags=O_RDWR)
/mnt/lustre/f34 has size 2000000000000 OK
PASS 34c (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 34d: write to sparse file ======================================================================================= 20:48:44 (1713487724)
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00590395 s, 694 kB/s
/mnt/lustre/f34 has size 2000000000000 OK
PASS 34d (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 34e: create
objects, some with size and some without ============================================================ 20:48:47 (1713487727) /mnt/lustre/f34e has size 1000 OK Succeed in opening file "/mnt/lustre/f34e"(flags=O_RDWR) /mnt/lustre/f34e has size 1000 OK PASS 34e (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 34f: read from a file with no objects until EOF ================================================================= 20:48:50 (1713487730) 93+1 records in 93+1 records out 48000 bytes (48 kB) copied, 0.00908006 s, 5.3 MB/s /tmp/f34f has size 48000 OK 1+0 records in 1+0 records out 48000 bytes (48 kB) copied, 0.000256287 s, 187 MB/s PASS 34f (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 34g: truncate long file ========================================================================================= 20:48:53 (1713487733) 100+0 records in 100+0 records out 100 bytes (100 B) copied, 0.00870847 s, 11.5 kB/s /mnt/lustre/f34g.sanity has size 1000000000000 OK /mnt/lustre/f34g.sanity has size 1000000000000 OK /mnt/lustre/f34g.sanity has size 2000000000000 OK /mnt/lustre/f34g.sanity has size 2000000000000 OK PASS 34g (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 34h: ftruncate file under grouplock should not block ========================================================== 20:48:56 (1713487736) 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.362203 s, 28.9 MB/s 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.0077108 s, 531 kB/s PASS 34h (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 35a: exec file with mode 444 (should return and not leak) ========================================================== 20:49:02 (1713487742) running as uid/gid/euid/egid 500/500/500/500, groups: [/mnt/lustre/f35a] execvp fails running 
/mnt/lustre/f35a (13): Permission denied PASS 35a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 36a: MDS utime check (mknod, utime) ======= 20:49:05 (1713487745) utime: good mknod times 1713487744 <= 1713487745 <= 1713487745 for /mnt/lustre/f36 utime: good utime mtimes 100000, atime 200000 PASS 36a (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 36b: OST utime check (open, utime) ======== 20:49:08 (1713487748) utime: good utime mtimes 100000, atime 200000 PASS 36b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 36c: non-root MDS utime check (mknod, utime) ========================================================== 20:49:11 (1713487751) striped dir -i0 -c2 -H crush2 /mnt/lustre/d36 running as uid/gid/euid/egid 500/500/500/500, groups: [utime] [/mnt/lustre/d36/f36] utime: good mknod times 1713487750 <= 1713487751 <= 1713487751 for /mnt/lustre/d36/f36 utime: good utime mtimes 100000, atime 200000 PASS 36c (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 36d: non-root OST utime check (open, utime) ========================================================== 20:49:14 (1713487754) running as uid/gid/euid/egid 500/500/500/500, groups: [utime] [/mnt/lustre/d36/f36] utime: good utime mtimes 100000, atime 200000 PASS 36d (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 36e: utime on non-owned file (should return error) ========================================================== 20:49:17 (1713487757) striped dir -i0 -c2 -H all_char /mnt/lustre/d36e.sanity running as uid/gid/euid/egid 500/500/500/500, groups: [utime] [/mnt/lustre/d36e.sanity/f36e.sanity] utime: utime(/mnt/lustre/d36e.sanity/f36e.sanity) failed: rc 1: Operation not permitted PASS 36e (2s) debug_raw_pointers=0 debug_raw_pointers=0 
debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 36f: utime on file racing with OST BRW write ==================================================================== 20:49:21 (1713487761) striped dir -i0 -c2 -H all_char /mnt/lustre/d36f.sanity fail_loc=0x80000214 Thu Apr 18 20:49:21 EDT 2024 1713487761 Thu Apr 18 20:49:22 EDT 2024 1713487762 PASS 36f (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 36g: FMD cache expiry =============================================================================== 20:49:25 (1713487765) striped dir -i0 -c2 -H crush /mnt/lustre/d36g.sanity FMD max age: 30s FMD before: 7 oleg329-server: error: read_param: '/proc/fs/lustre/obdfilter/lustre-OST0000/exports/192.168.203.29@tcp/fmd_count': No such device pdsh@oleg329-client: oleg329-server: ssh exited with exit code 19 FMD after: 0 PASS 36g (44s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 36h: utime on file racing with OST BRW write ==================================================================== 20:50:11 (1713487811) striped dir -i0 -c2 -H crush /mnt/lustre/d36h.sanity fail_loc=0x80000227 Thu Apr 18 20:50:11 EDT 2024 1713487811 Thu Apr 18 20:50:12 EDT 2024 1713487812 PASS 36h (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 36i: change mtime on striped directory ==== 20:50:15 (1713487815) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d36i.sanity PASS 36i (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 38: open a regular file with O_DIRECTORY should return -ENOTDIR ============================================================= 20:50:18 (1713487818) Error in opening file "/mnt/lustre/f38.sanity"(flags=O_DIRECTORY) 20: Not a directory PASS 38 (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39a: mtime changed on 
create ============== 20:50:21 (1713487821) Succeed in opening file "/mnt/lustre/f39a.sanity2"(flags=O_CREAT) PASS 39a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39b: mtime change on open, link, unlink, rename ================================================================ 20:50:26 (1713487826) striped dir -i1 -c1 -H all_char /mnt/lustre/d39b.sanity repeat after cancel_lru_locks PASS 39b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39c: mtime change on rename ===================================================================================== 20:50:30 (1713487830) repeat after cancel_lru_locks PASS 39c (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39d: create, utime, stat ======================================================================================== 20:50:36 (1713487836) repeat after cancel_lru_locks PASS 39d (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39e: create, stat, utime, stat ================================================================================== 20:50:39 (1713487839) repeat after cancel_lru_locks PASS 39e (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39f: create, stat, sleep, utime, stat =========================================================================== 20:50:42 (1713487842) repeat after cancel_lru_locks PASS 39f (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39g: write, chmod, stat ========================================================================================= 20:50:47 (1713487847) repeat after cancel_lru_locks PASS 39g (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39h: write, utime within one second, stat 
======================================================================= 20:50:53 (1713487853) repeat after cancel_lru_locks PASS 39h (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39i: write, rename, stat ======================================================================================== 20:50:57 (1713487857) repeat after cancel_lru_locks PASS 39i (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39j: write, rename, close, stat ================================================================================= 20:51:01 (1713487861) debug=-1 debug_mb=150 debug=-1 debug_mb=150 fail_loc=0x80000412 multiop /mnt/lustre/f39j.sanity voO_RDWR:w2097152_c TMPPIPE=/tmp/multiop_open_wait_pipe.7531 repeat after cancel_lru_locks fail_loc=0 debug_mb=21 debug_mb=21 debug=trace inode super iotrace malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec lfsck hsm snapshot layout debug=trace inode super iotrace malloc cache info ioctl neterror net warning buffs other dentry nettrace page dlmtrace error emerg ha rpctrace vfstrace reada mmap config console quota sec lfsck hsm snapshot layout PASS 39j (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39k: write, utime, close, stat ================================================================================== 20:51:11 (1713487871) multiop /mnt/lustre/f39k.sanity voO_RDWR:w2097152_c TMPPIPE=/tmp/multiop_open_wait_pipe.7531 repeat after cancel_lru_locks PASS 39k (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39l: directory atime update ===================================================================================== 20:51:16 (1713487876) PASS 39l (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y 
debug_raw_pointers=Y == sanity test 39m: test atime and mtime before 1970 ===== 20:51:26 (1713487886) repeat after cancel_lru_locks PASS 39m (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39n: check that O_NOATIME is honored ====== 20:51:31 (1713487891) 1+0 records in 1+0 records out PASS 39n (12s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39o: directory cached attributes updated after create ========================================================== 20:51:45 (1713487905) a b PASS 39o (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39p: remote directory cached attributes updated after create ================================================================== 20:51:48 (1713487908) striped dir -i1 -c2 -H crush /mnt/lustre/d39p.sanity/d39p.sanity striped dir -i1 -c2 -H crush2 /mnt/lustre/d39p.sanity/d39p.sanity/remote_dir1 striped dir -i1 -c2 -H crush2 /mnt/lustre/d39p.sanity/d39p.sanity/remote_dir2 remote_dir1 remote_dir2 PASS 39p (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39r: lazy atime update on OST ============= 20:51:52 (1713487912) obdfilter.lustre-OST0000.atime_diff=5 obdfilter.lustre-OST0001.atime_diff=5 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00318739 s, 1.3 MB/s 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00638545 s, 641 kB/s client atime: 1713487923 OST atime: atime: 0x6621c033:00000000 -- Thu Apr 18 20:52:03 2024 obdfilter.lustre-OST0000.atime_diff=0 obdfilter.lustre-OST0001.atime_diff=0 PASS 39r (13s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39q: close won't zero out atime =========== 20:52:07 (1713487927) multiop /mnt/lustre/d39q.sanity vD_c TMPPIPE=/tmp/multiop_open_wait_pipe.7531 PASS 39q (1s) debug_raw_pointers=0 
debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 39s: relatime is supported ================ 20:52:10 (1713487930) 192.168.203.129@tcp:/lustre /mnt/lustre lustre rw,checksum,flock,user_xattr,lruresize,lazystatfs,nouser_fid2path,verbose,noencrypt,statfs_project 0 0 Stopping client oleg329-client.virtnet /mnt/lustre (opts:) Starting client: oleg329-client.virtnet: -o relatime oleg329-server@tcp:/lustre /mnt/lustre 1+0 records in 1+0 records out Stopping client oleg329-client.virtnet /mnt/lustre (opts:) Starting client: oleg329-client.virtnet: -o user_xattr,flock oleg329-server@tcp:/lustre /mnt/lustre PASS 39s (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 40: failed open(O_TRUNC) doesn't truncate ======================================================================= 20:52:19 (1713487939) 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00417051 s, 982 kB/s running as uid/gid/euid/egid 500/500/500/500, groups: [openfile] [-f] [O_WRONLY:O_TRUNC] [/mnt/lustre/f40.sanity] Error in opening file "/mnt/lustre/f40.sanity"(flags=O_WRONLY) 13: Permission denied /mnt/lustre/f40.sanity has type file OK /mnt/lustre/f40.sanity has size 4096 OK PASS 40 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 41: test small file write + fstat =============================================================================== 20:52:22 (1713487942) First String: abcdefghijklmnopqr Second String: abcdefghiabcdefghijklmnopqr abcdefghiabcdefghijklmnopqr abcdefghiabcdefghijklmnopqr Pass! 
PASS 41 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
SKIP: sanity test_42a skipping ALWAYS excluded test 42a
SKIP: sanity test_42b skipping ALWAYS excluded test 42b
SKIP: sanity test_42c skipping ALWAYS excluded test 42c
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 42d: test complete truncate of file with cached dirty data ========================================================== 20:52:27 (1713487947)
debug=+cache
vm.dirty_writeback_centisecs = 0
vm.dirty_writeback_centisecs = 0
vm.dirty_ratio = 50
vm.dirty_background_ratio = 25
100+0 records in
100+0 records out
102400 bytes (102 kB) copied, 0.0233125 s, 4.4 MB/s
vm.dirty_writeback_centisecs = 500
vm.dirty_background_ratio = 10
vm.dirty_ratio = 20
checking grant......
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        7444     1280244   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        6952     1280736   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116       25744     3581248   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116       21580     3585412   1% /mnt/lustre[OST:1]
filesystem_summary:      7666232       47324     7166660   1% /mnt/lustre
pass grant check: client:33693696 server:33693696
PASS 42d (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 42e: verify sub-RPC writes are not done synchronously ========================================================== 20:52:31 (1713487951)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d42e.sanitye
total: 1000 open/close in 1.80 seconds: 556.08 ops/second
total: 934 open/close in 1.69 seconds: 552.61 ops/second
1+0 records in
1+0 records out
489684992 bytes (490 MB) copied, 12.6875 s, 38.6 MB/s
osc.lustre-OST0000-osc-ffff8800b5d40800.cur_dirty_bytes=0
osc.lustre-OST0000-osc-ffff8800b5d40800.cur_grant_bytes=489881600
osc.lustre-OST0000-osc-ffff8800b5d40800.cur_dirty_bytes=396361728
osc.lustre-OST0000-osc-ffff8800b5d40800.cur_grant_bytes=93741056
osc.lustre-OST0000-osc-ffff8800b5d40800.rpc_stats=0
osc.lustre-OST0000-osc-ffff8800b5d40800.rpc_stats=
snapshot_time:         1713488003.320674280 secs.nsecs
start_time:            1713488000.260497326 secs.nsecs
elapsed_time:          3.060176954 secs.nsecs
read RPCs in flight:  0
write RPCs in flight: 0
pending write pages:  0
pending read pages:   0

                        read                    write
pages per rpc         rpcs   % cum % |       rpcs   % cum %
1:                       0   0   0   |          0   0   0
2:                       0   0   0   |          0   0   0
4:                       0   0   0   |          0   0   0
8:                       0   0   0   |          0   0   0
16:                      0   0   0   |         33   8   8
32:                      0   0   0   |          0   0   8
64:                      0   0   0   |          0   0   8
128:                     0   0   0   |          0   0   8
256:                     0   0   0   |        375  91 100

                        read                    write
rpcs in flight        rpcs   % cum % |       rpcs   % cum %
1:                       0   0   0   |         50  12  12
2:                       0   0   0   |        342  83  96
3:                       0   0   0   |         16   3 100

                        read                    write
offset                rpcs   % cum % |       rpcs   % cum %
0:                       0   0   0   |        408 100 100

checking grant......
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        7548     1280140   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        7052     1280636   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116      259220     3347800   8% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116       21584     3585436   1% /mnt/lustre[OST:1]

filesystem_summary:      7666232      280804     6933236   4% /mnt/lustre

pass grant check: client:498319360 server:498319360
PASS 42e (63s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 43A: execution of file opened for write should return -ETXTBSY ========================================================== 20:53:36 (1713488016)
striped dir -i1 -c2 -H crush /mnt/lustre/d43A.sanity
/home/green/git/lustre-release/lustre/tests/sanity.sh: line 5690: /mnt/lustre/d43A.sanity/f43A.sanity: Text file busy
PASS 43A (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 43a: open(RDWR) of file being executed should return -ETXTBSY ========================================================== 20:53:40 (1713488020)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d43a.sanity
open(O_RDWR|O_CREAT): Text file busy
/home/green/git/lustre-release/lustre/tests/test-framework.sh: line 4697: 4768 Terminated              $DIR/$tdir/sleep 60 (wd:

~) PASS 43a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 43b: truncate of file being executed should return -ETXTBSY ========================================================== 20:53:44 (1713488024) striped dir -i1 -c2 -H all_char /mnt/lustre/d43b.sanity truncate: cannot truncate '/mnt/lustre/d43b.sanity/sleep' to length 0: Text file busy /home/green/git/lustre-release/lustre/tests/test-framework.sh: line 4697: 5519 Terminated $DIR/$tdir/sleep 60 (wd: ~) PASS 43b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 43c: md5sum of copy into lustre =========== 20:53:49 (1713488029) striped dir -i1 -c2 -H crush2 /mnt/lustre/d43c.sanity bash: OK PASS 43c (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 44A: zero length read from a sparse stripe ========================================================== 20:53:52 (1713488032) 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00319188 s, 1.3 MB/s 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00442312 s, 926 kB/s PASS 44A (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 44a: test sparse pwrite ========================================================================================= 20:53:55 (1713488035) --------writing /mnt/lustre/d44a-8388608 at 8388608 --------writing /mnt/lustre/d44a-10485760 at 10485760 --------writing /mnt/lustre/d44a-12582911 at 12582911 PASS 44a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 45: osc io page accounting ====================================================================================== 20:53:59 (1713488039) vm.dirty_writeback_centisecs = 0 vm.dirty_writeback_centisecs = 0 vm.dirty_ratio = 50 vm.dirty_background_ratio = 25 executing "echo blah > /mnt/lustre/f45" before 0, after 4096 
executing "> /mnt/lustre/f45" before 4096, after 0 executing "echo blah > /mnt/lustre/f45" before 0, after 4096 executing "sync" before 4096, after 0 executing "echo blah > /mnt/lustre/f45" before 0, after 4096 executing "cancel_lru_locks osc" before 4096, after 0 vm.dirty_writeback_centisecs = 500 vm.dirty_background_ratio = 10 vm.dirty_ratio = 20 PASS 45 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 46: dirtying a previously written page ========================================================================== 20:54:02 (1713488042) vm.dirty_writeback_centisecs = 0 vm.dirty_writeback_centisecs = 0 vm.dirty_ratio = 50 vm.dirty_background_ratio = 25 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00260929 s, 1.6 MB/s 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.00463457 s, 884 kB/s 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.000971646 s, 4.2 MB/s vm.dirty_writeback_centisecs = 500 vm.dirty_background_ratio = 10 vm.dirty_ratio = 20 PASS 46 (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 48a: Access renamed working dir (should return errors)=========================================================== 20:54:05 (1713488045) striped dir -i0 -c2 -H all_char /mnt/lustre/d48a.sanity striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d48a.sanity striped dir -i0 -c2 -H crush2 bar striped dir -i0 -c2 -H crush .bar mkdir: cannot create directory '.': File exists rmdir: failed to remove '.': Invalid argument PASS 48a (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 48b: Access removed working dir (should return errors)=========================================================== 20:54:09 (1713488049) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d48b.sanity touch: cannot touch 'foo': No such file or directory mkdir: cannot create directory 'foo': No such file or directory 
touch: cannot touch '.foo': No such file or directory mkdir: cannot create directory '.foo': No such file or directory ls: cannot access .: No such file or directory mkdir: cannot create directory '.': File exists rmdir: failed to remove '.': Invalid argument ln: failed to create symbolic link 'foo': No such file or directory PASS 48b (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 48c: Access removed working subdir (should return errors) ========================================================== 20:54:12 (1713488052) striped dir -i0 -c2 -H crush /mnt/lustre/d48c.sanity/dir touch: cannot touch 'foo': No such file or directory mkdir: cannot create directory 'foo': No such file or directory touch: cannot touch '.foo': No such file or directory mkdir: cannot create directory '.foo': No such file or directory ls: cannot access .: No such file or directory mkdir: cannot create directory '.': File exists rmdir: failed to remove '.': Invalid argument ln: failed to create symbolic link 'foo': No such file or directory PASS 48c (1s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 48d: Access removed parent subdir (should return errors) ========================================================== 20:54:15 (1713488055) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d48d.sanity/dir touch: cannot touch 'foo': No such file or directory mkdir: cannot create directory 'foo': No such file or directory touch: cannot touch '.foo': No such file or directory mkdir: cannot create directory '.foo': No such file or directory ls: cannot access .: No such file or directory ls: cannot access ..: No such file or directory mkdir: cannot create directory '.': File exists rmdir: failed to remove '.': Invalid argument ln: failed to create symbolic link 'foo': No such file or directory PASS 48d (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 
48e: Access to recreated parent subdir (should return errors) ========================================================== 20:54:18 (1713488058)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d48e.sanity/dir
touch: cannot touch '../foo': No such file or directory
PASS 48e (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 48f: non-zero nlink dir unlink won't LBUG() ========================================================== 20:54:21 (1713488061)
SKIP: sanity test_48f needs different host for mdt1 mdt2
SKIP 48f (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 49: Change max_pages_per_rpc won't break osc extent ========================================================== 20:54:24 (1713488064)
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00400128 s, 0.0 kB/s
osc.lustre-OST0000-osc-ffff8800b5d40800.max_pages_per_rpc=1024
PASS 49 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 50: special situations: /proc symlinks ========================================================================= 20:54:29 (1713488069)
striped dir -i0 -c2 -H all_char /mnt/lustre/d50.sanity
anaconda-ks.cfg
stress.sh
PASS 50 (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 51a: special situations: split htree with empty entry ============================================================ 20:54:34 (1713488074)
striped dir -i1 -c1 -H all_char /mnt/lustre/d51a.sanity
total: 201 create in 0.37 seconds: 543.67 ops/second
PASS 51a (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 51b: exceed 64k subdirectory nlink limit on create, verify unlink ========================================================== 20:54:40 (1713488080)
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        7700     1279988   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        7184     1280504   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116       25836     3581184   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116       22648     3584372   1% /mnt/lustre[OST:1]

filesystem_summary:      7666232       48484     7165556   1% /mnt/lustre

UUID                      Inodes       IUsed       IFree IUse% Mounted on
lustre-MDT0000_UUID      1024000        4056     1019944   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1024000        2002     1021998   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID       262144        2708      259436   2% /mnt/lustre[OST:0]
lustre-OST0001_UUID       262144        2733      259411   2% /mnt/lustre[OST:1]

filesystem_summary:       524905        6058      518847   2% /mnt/lustre

- mkdir 6477 (time 1713488091.94 total 10.00 last 647.60)
- mkdir 10000 (time 1713488096.62 total 14.68 last 752.54)
- mkdir 18471 (time 1713488106.62 total 24.68 last 846.95)
- mkdir 20000 (time 1713488109.56 total 27.62 last 521.01)
- mkdir 25660 (time 1713488119.56 total 37.62 last 565.95)
- mkdir 30000 (time 1713488124.39 total 42.45 last 899.08)
- mkdir 39575 (time 1713488134.39 total 52.45 last 957.47)
- mkdir 48940 (time 1713488144.39 total 62.45 last 936.48)
- mkdir 58091 (time 1713488154.39 total 72.45 last 915.05)
total: 65636 mkdir in 80.41 seconds: 816.24 ops/second
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        7700     1279988   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116      275800     1011888  22% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116       25836     3581184   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116       22648     3584372   1% /mnt/lustre[OST:1]

filesystem_summary:      7666232       48484     7165556   1% /mnt/lustre

UUID                      Inodes       IUsed       IFree IUse% Mounted on
lustre-MDT0000_UUID      1024000        4056     1019944   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1024000       67638      956362   7% /mnt/lustre[MDT:1]
lustre-OST0000_UUID       262144        2708      259436   2% /mnt/lustre[OST:0]
lustre-OST0001_UUID       262144        2733      259411   2% /mnt/lustre[OST:1]

filesystem_summary:       590541       71694      518847  13% /mnt/lustre

nlink before: 1, created before: 65636
- unlinked 0 (time 1713488163 ; total 0 ; last 0)
- unlinked 10000 (time 1713488184 ; total 21 ; last 21)
- unlinked 20000 (time 1713488203 ; total 40 ; last 19)
- unlinked 30000 (time 1713488221 ; total 58 ; last 18)
- unlinked 40000 (time 1713488242 ; total 79 ; last 21)
- unlinked 50000 (time 1713488262 ; total 99 ; last 20)
- unlinked 60000 (time 1713488283 ; total 120 ; last 21)
total: 65536 unlinks in 131 seconds: 500.274811 unlinks/second
nlink between: 1
- unlinked 0 (time 1713488294 ; total 0 ; last 0)
total: 100 unlinks in 1 seconds: 100.000000 unlinks/second
nlink after: 1
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        7700     1279988   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116       13256     1274432   2% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116       25836     3581184   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116       22648     3584372   1% /mnt/lustre[OST:1]

filesystem_summary:      7666232       48484     7165556   1% /mnt/lustre

UUID                      Inodes       IUsed       IFree IUse% Mounted on
lustre-MDT0000_UUID      1024000        4056     1019944   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1024000        2002     1021998   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID       262144        2708      259436   2% /mnt/lustre[OST:0]
lustre-OST0001_UUID       262144        2733      259411   2% /mnt/lustre[OST:1]

filesystem_summary:       524905        6058      518847   2% /mnt/lustre

PASS 51b (216s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 51d: check LOV round-robin OST object distribution ========================================================== 20:58:18 (1713488298)
SKIP: sanity test_51d needs >= 3 OSTs
SKIP 51d (1s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 51e: check file nlink limit =============== 20:58:20 (1713488300)
striped dir -i1 -c1 -H crush2 /mnt/lustre/d51e.sanity
striped dir -i1 -c1 -H all_char /mnt/lustre/d51e.sanity/d0
- link 4632 (time 1713488312.07 total 10.00 last 463.14)
- link 8765 (time 1713488322.07 total 20.00 last 413.23)
- link 10000 (time 1713488325.10 total 23.03 last 407.66)
- link 13622 (time
1713488335.10 total 33.03 last 362.20) - link 18262 (time 1713488345.10 total 43.03 last 463.99) - link 20000 (time 1713488348.69 total 46.62 last 483.87) - link 24889 (time 1713488358.69 total 56.63 last 488.82) - link 28523 (time 1713488368.69 total 66.63 last 363.33) - link 30000 (time 1713488372.58 total 70.51 last 380.33) - link 33833 (time 1713488382.58 total 80.51 last 383.24) - link 37722 (time 1713488392.58 total 90.51 last 388.86) - link 40000 (time 1713488399.03 total 96.97 last 352.92) - link 42799 (time 1713488409.03 total 106.97 last 279.88) - link 47054 (time 1713488419.04 total 116.97 last 425.47) - link 48711 (time 1713488429.04 total 126.97 last 165.66) - link 50000 (time 1713488438.71 total 136.65 last 133.26) - link 52468 (time 1713488448.71 total 146.65 last 246.77) - link 55084 (time 1713488458.71 total 156.65 last 261.57) - link 59099 (time 1713488468.72 total 166.65 last 401.39) - link 60000 (time 1713488473.00 total 170.94 last 210.10) - link 61492 (time 1713488483.01 total 180.95 last 149.06) - link 63572 (time 1713488493.02 total 190.95 last 207.93) link(/mnt/lustre/d51e.sanity/d0/foo, /mnt/lustre/d51e.sanity/d0/f-64999) error: Too many links total: 64999 link in 196.27 seconds: 331.17 ops/second - unlinked 0 (time 1713488499 ; total 0 ; last 0) - unlinked 10000 (time 1713488520 ; total 21 ; last 21) - unlinked 20000 (time 1713488551 ; total 52 ; last 31) - unlinked 30000 (time 1713488570 ; total 71 ; last 19) - unlinked 40000 (time 1713488588 ; total 89 ; last 18) - unlinked 50000 (time 1713488605 ; total 106 ; last 17) - unlinked 60000 (time 1713488625 ; total 126 ; last 20) unlink(/mnt/lustre/d51e.sanity/d0/f-64999) error: No such file or directory total: 64999 unlinks in 136 seconds: 477.933838 unlinks/second PASS 51e (337s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 51f: check many open files limit ========== 21:03:58 (1713488638) striped dir -i1 -c2 -H all_char 
/mnt/lustre/d51f.sanity MDT1 numfree=1021992, max=100000 changed ulimit from 1024 to 100020 - open/keep 6017 (time 1713488650.22 total 10.00 last 601.69) - open/keep 10000 (time 1713488656.08 total 15.87 last 678.99) - open/keep 16724 (time 1713488666.08 total 25.87 last 672.35) - open/keep 20000 (time 1713488670.91 total 30.69 last 678.87) - open/keep 25798 (time 1713488680.91 total 40.69 last 579.78) - open/keep 30000 (time 1713488686.42 total 46.21 last 761.78) - open/keep 37173 (time 1713488696.42 total 56.21 last 717.30) - open/keep 40000 (time 1713488700.61 total 60.39 last 675.74) - open/keep 46281 (time 1713488710.61 total 70.39 last 628.03) - open/keep 50000 (time 1713488715.71 total 75.49 last 729.64) - open/keep 57076 (time 1713488725.71 total 85.49 last 707.53) - open/keep 60000 (time 1713488732.49 total 92.27 last 431.16) - open/keep 64010 (time 1713488742.49 total 102.27 last 400.99) - open/keep 68206 (time 1713488752.49 total 112.28 last 419.53) - open/keep 70000 (time 1713488756.91 total 116.69 last 406.15) total: 71359 open/keep in 120.00 seconds: 594.65 ops/second - closed 5697 (time 1713488770.22 total 10.00 last -6429.80) - closed 10000 (time 1713488776.85 total 16.63 last 649.27) - closed 15752 (time 1713488786.85 total 26.63 last 575.20) - closed 20000 (time 1713488793.45 total 33.24 last 642.94) - closed 26094 (time 1713488803.45 total 43.24 last 609.35) - closed 30000 (time 1713488809.13 total 48.91 last 688.28) - closed 36836 (time 1713488819.13 total 58.91 last 683.49) - closed 40000 (time 1713488823.83 total 63.61 last 673.12) - closed 47479 (time 1713488833.83 total 73.61 last 747.89) - closed 50000 (time 1713488837.67 total 77.45 last 657.11) - closed 55868 (time 1713488847.67 total 87.45 last 586.72) - closed 60000 (time 1713488854.85 total 94.63 last 575.49) - closed 65693 (time 1713488864.85 total 104.63 last 569.28) - closed 70000 (time 1713488871.95 total 111.74 last 606.25) total: 71359 close in 113.80 seconds: 627.07 close/second 
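Test 51f above raises the per-process descriptor limit (1024 to 100020 in this run) before holding ~71k files open at once. A minimal sketch of querying those limits from shell (values are system-dependent, so none are asserted here):

```shell
# Soft limit is what the process is currently held to; the hard limit is the
# ceiling the soft limit may be raised to without privilege.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft=$soft hard=$hard"
```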
- unlinked 0 (time 1713488875 ; total 0 ; last 0) - unlinked 10000 (time 1713488898 ; total 23 ; last 23) - unlinked 20000 (time 1713488920 ; total 45 ; last 22) - unlinked 30000 (time 1713488941 ; total 66 ; last 21) - unlinked 40000 (time 1713488964 ; total 89 ; last 23) - unlinked 50000 (time 1713488984 ; total 109 ; last 20) - unlinked 60000 (time 1713489004 ; total 129 ; last 20) - unlinked 70000 (time 1713489026 ; total 151 ; last 22) unlink(/mnt/lustre/d51f.sanity/f71359) error: No such file or directory total: 71359 unlinks in 154 seconds: 463.370117 unlinks/second PASS 51f (393s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 52a: append-only flag test (should return errors) ========================================================== 21:10:33 (1713489033) striped dir -i0 -c2 -H crush /mnt/lustre/d52a.sanity cp: cannot create regular file '/mnt/lustre/d52a.sanity/foo': Operation not permitted rename '/mnt/lustre/d52a.sanity/foo' returned -1: Operation not permitted PASS 52a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 52b: immutable flag test (should return errors) ================================================================= 21:10:38 (1713489038) striped dir -i0 -c2 -H crush /mnt/lustre/d52b.sanity /home/green/git/lustre-release/lustre/tests/sanity.sh: line 6258: /mnt/lustre/d52b.sanity/foo: Permission denied cp: cannot create regular file '/mnt/lustre/d52b.sanity/foo': Permission denied /home/green/git/lustre-release/lustre/tests/sanity.sh: line 6263: /mnt/lustre/d52b.sanity/foo: Permission denied rename '/mnt/lustre/d52b.sanity/foo' returned -1: Operation not permitted PASS 52b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 53: verify that MDS and OSTs agree on pre-creation ============================================================== 21:10:43 (1713489043) 
lustre-OST0000.last_id=0x280000bd1:20225; MDS.last_id=20225 lustre-OST0001.last_id=0x2c0000403:20225; MDS.last_id=20225 PASS 53 (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 54a: unix domain socket test ============== 21:10:50 (1713489050) /home/green/git/lustre-release/lustre/tests/socketserver 23832: server started on /mnt/lustre/socket at Thu Apr 18 21:10:51 EDT 2024 /home/green/git/lustre-release/lustre/tests/socketserver 23842: connection on /mnt/lustre/socket at Thu Apr 18 21:10:51 EDT 2024 /home/green/git/lustre-release/lustre/tests/socketclient 23845: connection on /mnt/lustre/socket at Thu Apr 18 21:10:51 EDT 2024 Message: This is a message from the server! PASS 54a (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 54b: char device works in lustre ================================================================================ 21:10:55 (1713489055) 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.000343384 s, 11.9 MB/s PASS 54b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 54c: block device works in lustre =============================================================================== 21:11:00 (1713489060) make a loop file system with /mnt/lustre/f54c.sanity on /mnt/lustre/loop54c (3). 
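Tests 54a through 54e verify that special files (unix sockets, char/block devices, fifos, ttys) behave normally on Lustre. A minimal fifo round-trip on any local filesystem, sketched with POSIX shell (not the Lustre socketserver/socketclient helpers):

```shell
# Writer and reader rendezvous on a named pipe: each open blocks until the
# other side arrives, then the message passes through.
d=$(mktemp -d)
mkfifo "$d/pipe"
( echo hello > "$d/pipe" ) &   # background writer blocks until a reader opens
read line < "$d/pipe"          # reader unblocks the writer and gets the data
echo "got: $line"
wait
```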
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00498708 s, 821 kB/s
mke2fs 1.46.2.wc5 (26-Mar-2022)
Discarding device blocks: 1024/4100 done
Creating filesystem with 4100 1k blocks and 1032 inodes
Allocating group tables: 0/1 done
Writing inode tables: 0/1 done
Writing superblocks and filesystem accounting information: 0/1 done
striped dir -i0 -c2 -H all_char /mnt/lustre/d54c.sanity
30+0 records in
30+0 records out
122880 bytes (123 kB) copied, 0.00146575 s, 83.8 MB/s
Filesystem          1K-blocks  Used Available Use% Mounted on
/mnt/lustre/loop54c      3950   135      3610   4% /mnt/lustre/d54c.sanity
30+0 records in
30+0 records out
122880 bytes (123 kB) copied, 0.000272368 s, 451 MB/s
losetup: /mnt/lustre/loop54c: detach failed: No such device or address
losetup: /dev/loop3: detach failed: No such device or address
PASS 54c (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 54d: fifo device works in lustre ================================================================================ 21:11:06 (1713489066)
PASS 54d (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 54e: console/tty device works in lustre ================================================================================ 21:11:10 (1713489070)
PASS 54e (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 55a: OBD device life cycle unit tests ===== 21:11:14 (1713489074)
kunit/obd_test options: 'verbose=2'
Devices:
  0 UP mgc MGC192.168.203.129@tcp 45590e09-042d-4a0d-bb8e-50238d7429be 6
  1 UP lov lustre-clilov-ffff8800b5d40800 32604e58-40e3-410e-b15a-eec390f47c34 5
  2 UP lmv lustre-clilmv-ffff8800b5d40800 32604e58-40e3-410e-b15a-eec390f47c34 6
  3 UP mdc lustre-MDT0000-mdc-ffff8800b5d40800 32604e58-40e3-410e-b15a-eec390f47c34 6
  4 UP mdc lustre-MDT0001-mdc-ffff8800b5d40800 32604e58-40e3-410e-b15a-eec390f47c34 6
  5 UP osc lustre-OST0000-osc-ffff8800b5d40800 32604e58-40e3-410e-b15a-eec390f47c34 6
  6 UP osc lustre-OST0001-osc-ffff8800b5d40800 32604e58-40e3-410e-b15a-eec390f47c34 6
  7 UP obd_test obd_name obd_uuid 4
PASS 55a (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 55b: Load and unload max OBD devices ====== 21:11:19 (1713489079)
Load time: 138
Devices:
  23990 UP obd_test obd_name_23984 obd_uuid_23984 4
  23991 UP obd_test obd_name_23985 obd_uuid_23985 4
  23992 UP obd_test obd_name_23986 obd_uuid_23986 4
  23993 UP obd_test obd_name_23987 obd_uuid_23987 4
  23994 UP obd_test obd_name_23988 obd_uuid_23988 4
  23995 UP obd_test obd_name_23989 obd_uuid_23989 4
  23996 UP obd_test obd_name_23990 obd_uuid_23990 4
  23997 UP obd_test obd_name_23991 obd_uuid_23991 4
  23998 UP obd_test obd_name_23992 obd_uuid_23992 4
  23999 UP obd_test obd_name_23993 obd_uuid_23993 4
Unload time: 144
PASS 55b (147s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56a: check /home/green/git/lustre-release/lustre/utils/lfs getstripe ========================================================== 21:13:48 (1713489228)
striped dir -i0 -c2 -H crush2 /mnt/lustre/d56a.sanity/dir
/home/green/git/lustre-release/lustre/utils/lfs getstripe showed obdidx or l_ost_idx
/home/green/git/lustre-release/lustre/utils/lfs getstripe file1 passed
/home/green/git/lustre-release/lustre/utils/lfs getstripe --verbose passed
/home/green/git/lustre-release/lustre/utils/lfs getstripe --fid passed
/home/green/git/lustre-release/lustre/utils/lfs getstripe --obd passed
PASS 56a (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56b: check /home/green/git/lustre-release/lustre/utils/lfs getdirstripe ========================================================== 21:13:54 (1713489234)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56b.sanity
striped dir -i0 -c2 -H crush /mnt/lustre/d56b.sanity/dir1
striped dir -i0 -c2 -H all_char
/mnt/lustre/d56b.sanity/dir2 striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56b.sanity/dir3 PASS 56b (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56bb: check /home/green/git/lustre-release/lustre/utils/lfs getdirstripe layout is YAML ========================================================== 21:13:59 (1713489239) lmv_stripe_count: 1 lmv_stripe_offset: -1 lmv_hash_type: none lmv_max_inherit: -1 lmv_max_inherit_rr: 3 PASS 56bb (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56c: check 'lfs df' showing device status ========================================================== 21:14:04 (1713489244) PASS 56c (25s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56d: 'lfs df -v' prints only configured devices ========================================================== 21:14:32 (1713489272) UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 1414116 13360 1274328 2% /mnt/lustre[MDT:0] lustre-MDT0001_UUID 1414116 20500 1267188 2% /mnt/lustre[MDT:1] lustre-OST0000_UUID 3833116 26348 3580672 1% /mnt/lustre[OST:0] lustre-OST0001_UUID 3833116 23164 3583828 1% /mnt/lustre[OST:1] filesystem_summary: 7666232 49512 7164500 1% /mnt/lustre PASS 56d (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56e: 'lfs df' Handle non LustreFS & multiple LustreFS ========================================================== 21:14:36 (1713489276) PASS 56e (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56g: check lfs find -name ================= 21:14:40 (1713489280) striped dir -i0 -c2 -H crush /mnt/lustre/d56g.sanity striped dir -i0 -c2 -H crush2 /mnt/lustre/d56g.sanity/dir1 striped dir -i0 -c2 -H all_char /mnt/lustre/d56g.sanity/dir2 striped dir -i0 -c2 -H crush /mnt/lustre/d56g.sanity/dir3 PASS 56g (3s) 
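Tests 56g and 56h check `lfs find -name` and the negated `! -name`, which follow ordinary find(1) pattern semantics. A plain-find analogue on a local directory (POSIX find; file names here are illustrative):

```shell
# Two files match the glob, one does not; the negated predicate inverts that.
d=$(mktemp -d)
touch "$d/file1" "$d/file2" "$d/keep.txt"
find "$d" -type f -name 'file*' | wc -l      # 2
find "$d" -type f ! -name 'file*' | wc -l    # 1
```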
debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56h: check lfs find ! -name =============== 21:14:46 (1713489286) PASS 56h (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56i: check 'lfs find -ost UUID' skips directories ========================================================== 21:14:50 (1713489290) striped dir -i0 -c2 -H crush /mnt/lustre/d56i.sanity PASS 56i (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56j: check lfs find -type d =============== 21:14:55 (1713489295) PASS 56j (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56k: check lfs find -type f =============== 21:15:00 (1713489300) PASS 56k (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56l: check lfs find -type b =============== 21:15:04 (1713489304) PASS 56l (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56m: check lfs find -type c =============== 21:15:09 (1713489309) PASS 56m (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56n: check lfs find -type l =============== 21:15:14 (1713489314) PASS 56n (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56o: check lfs find -mtime for old files == 21:15:19 (1713489319) striped dir -i0 -c2 -H all_char /mnt/lustre/d56o.sanity striped dir -i0 -c2 -H crush2 /mnt/lustre/d56o.sanity/dir1 striped dir -i0 -c2 -H crush2 /mnt/lustre/d56o.sanity/dir2 striped dir -i0 -c2 -H all_char /mnt/lustre/d56o.sanity/dir3 1+0 records in 1+0 records out 512 bytes (512 B) copied, 0.00827332 s, 61.9 kB/s PASS 56o (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56ob: check lfs find -atime -mtime 
-ctime with units ========================================================== 21:15:25 (1713489325) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56ob.sanity PASS 56ob (5s) debug_raw_pointers=0 debug_raw_pointers=0 SKIP: sanity test_56oc skipping excluded test 56oc debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56od: check lfs find -btime with units ==== 21:15:33 (1713489333) striped dir -i0 -c1 -H fnv_1a_64 /mnt/lustre/d56od.sanity/d.btime striped dir -i0 -c1 -H all_char /mnt/lustre/d56od.sanity/d.btime/dir1 striped dir -i0 -c1 -H all_char /mnt/lustre/d56od.sanity/d.btime/dir2 striped dir -i0 -c1 -H crush2 /mnt/lustre/d56od.sanity/d.btime/dir3 Clock skew between client and server: 1, age:5 PASS 56od (9s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56p: check lfs find -uid and ! -uid ======= 21:15:44 (1713489344) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56p.sanity striped dir -i0 -c2 -H crush /mnt/lustre/d56p.sanity/dir1 striped dir -i0 -c2 -H crush /mnt/lustre/d56p.sanity/dir2 striped dir -i0 -c2 -H crush /mnt/lustre/d56p.sanity/dir3 PASS 56p (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56q: check lfs find -gid and ! 
-gid ======= 21:15:49 (1713489349) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56q.sanity striped dir -i0 -c2 -H crush2 /mnt/lustre/d56q.sanity/dir1 striped dir -i0 -c2 -H crush2 /mnt/lustre/d56q.sanity/dir2 striped dir -i0 -c2 -H crush2 /mnt/lustre/d56q.sanity/dir3 PASS 56q (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56r: check lfs find -size works =========== 21:15:55 (1713489355) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56r.sanity striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56r.sanity/dir1 striped dir -i0 -c2 -H crush2 /mnt/lustre/d56r.sanity/dir2 striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56r.sanity/dir3 PASS 56r (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56ra: check lfs find -size -lazy works for data on OSTs ========================================================== 21:16:02 (1713489362) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56ra.sanity striped dir -i0 -c2 -H crush2 /mnt/lustre/d56ra.sanity/dir1 striped dir -i0 -c2 -H crush2 /mnt/lustre/d56ra.sanity/dir2 striped dir -i0 -c2 -H all_char /mnt/lustre/d56ra.sanity/dir3 PASS 56ra (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56rb: check lfs find --size --ost/--mdt works ========================================================== 21:16:10 (1713489370) striped dir -i0 -c2 -H crush /mnt/lustre/d56rb.sanity 1+0 records in 1+0 records out 1048576 bytes (1.0 MB) copied, 0.051608 s, 20.3 MB/s /mnt/lustre/d56rb.sanity/f56rb.sanity /mnt/lustre/d56rb.sanity/f56rb.sanity PASS 56rb (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56rc: check lfs find --mdt-count/--mdt-hash works ========================================================== 21:16:15 (1713489375) striped dir -i0 -c2 -H all_char /mnt/lustre/d56rc.sanity PASS 56rc (5s) debug_raw_pointers=0 debug_raw_pointers=0 
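The size predicates exercised by test 56r (`lfs find -size`) mirror find(1): sizes are rounded up to the given unit and compared with +/- for strictly greater/less. A local sketch (GNU find and truncate assumed; file names are illustrative):

```shell
# A 1 MiB file is 1024 1K-units, a 1 KiB file is 1 unit, so +512k matches
# only the big file and -2k matches only the small one.
d=$(mktemp -d)
truncate -s 1M "$d/big"
truncate -s 1k "$d/small"
find "$d" -type f -size +512k -exec basename {} \;   # big
find "$d" -type f -size -2k -exec basename {} \;     # small
```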
debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56rd: check lfs find --printf special files ========================================================== 21:16:22 (1713489382) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56rd.sanity /mnt/lustre/d56rd.sanity/fifo p -1 /mnt/lustre/d56rd.sanity/chardev c -1 PASS 56rd (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56s: check lfs find -stripe-count works === 21:16:27 (1713489387) striped dir -i0 -c2 -H all_char /mnt/lustre/d56s.sanity striped dir -i0 -c2 -H crush2 /mnt/lustre/d56s.sanity/dir1 striped dir -i0 -c2 -H crush /mnt/lustre/d56s.sanity/dir2 striped dir -i0 -c2 -H all_char /mnt/lustre/d56s.sanity/dir3 PASS 56s (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56t: check lfs find -stripe-size works ==== 21:16:33 (1713489393) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56t.sanity striped dir -i0 -c2 -H crush /mnt/lustre/d56t.sanity/dir1 striped dir -i0 -c2 -H crush2 /mnt/lustre/d56t.sanity/dir2 striped dir -i0 -c2 -H crush /mnt/lustre/d56t.sanity/dir3 striped dir -i0 -c2 -H crush2 /mnt/lustre/d56t.sanity striped dir -i0 -c2 -H all_char /mnt/lustre/d56t.sanity/dir1 striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56t.sanity/dir2 striped dir -i0 -c2 -H crush2 /mnt/lustre/d56t.sanity/dir3 PASS 56t (5s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56u: check lfs find -stripe-index works === 21:16:40 (1713489400) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56u.sanity striped dir -i0 -c2 -H all_char /mnt/lustre/d56u.sanity/dir1 striped dir -i0 -c2 -H all_char /mnt/lustre/d56u.sanity/dir2 striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56u.sanity/dir3 PASS 56u (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56v: check 'lfs find -m match with lfs getstripe -m' 
========================================================== 21:16:47 (1713489407) striped dir -i0 -c2 -H crush2 /mnt/lustre/d56v.sanity striped dir -i0 -c2 -H crush2 /mnt/lustre/d56v.sanity/dir1 striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56v.sanity/dir2 striped dir -i0 -c2 -H crush2 /mnt/lustre/d56v.sanity/dir3 PASS 56v (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56wa: check lfs_migrate -c stripe_count works ========================================================== 21:16:52 (1713489412) striped dir -i0 -c1 -H crush2 /mnt/lustre/d56wa.sanity striped dir -i0 -c1 -H all_char /mnt/lustre/d56wa.sanity/dir1 striped dir -i0 -c1 -H crush /mnt/lustre/d56wa.sanity/dir2 striped dir -i0 -c1 -H crush2 /mnt/lustre/d56wa.sanity/dir3 total: 200 link in 0.41 seconds: 486.61 ops/second /home/green/git/lustre-release/lustre/scripts/lfs_migrate -y -c 1 /mnt/lustre/d56wa.sanity/file1 /mnt/lustre/d56wa.sanity/file1: done /home/green/git/lustre-release/lustre/utils/lfs migrate -i 1 /mnt/lustre/d56wa.sanity/migr_1_ost /home/green/git/lustre-release/lustre/scripts/lfs_migrate -y -c 1 /mnt/lustre/d56wa.sanity/dir1 /mnt/lustre/d56wa.sanity/dir1/link154: done /mnt/lustre/d56wa.sanity/dir1/link195: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link104: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link181: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link157: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link7: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link43: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link107: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link78: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link113: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link139: already migrated via another hard link 
/mnt/lustre/d56wa.sanity/dir1/link163: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link137: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link162: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link136: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link152: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link94: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link170: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link35: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link138: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link5: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link83: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link109: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link53: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link156: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link2: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link49: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link149: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link84: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link118: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link58: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link187: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link133: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link148: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link197: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link79: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link24: already migrated via 
another hard link /mnt/lustre/d56wa.sanity/dir1/link161: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link134: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link66: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link129: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link114: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link142: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link102: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link185: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link89: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link174: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link106: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link75: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link15: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link183: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link45: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link128: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link72: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link99: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link20: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link90: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link86: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link96: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link47: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link39: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link147: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link116: 
already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link93: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link73: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link119: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link56: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link37: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link71: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link13: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link140: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link67: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link151: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link122: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link115: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link186: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link191: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link92: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link46: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link176: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link51: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link52: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/file3: done /mnt/lustre/d56wa.sanity/dir1/link127: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link180: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link171: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link123: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link193: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link124: already migrated 
via another hard link /mnt/lustre/d56wa.sanity/dir1/link27: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link57: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link0: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link91: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link182: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link34: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link60: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link166: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link135: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link1: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link177: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link54: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link82: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link167: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/file2: done /mnt/lustre/d56wa.sanity/dir1/link4: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link146: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link150: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link32: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link169: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link194: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link158: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link77: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link23: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link36: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link42: already migrated via another hard link 
/mnt/lustre/d56wa.sanity/dir1/link8: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link29: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link65: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link164: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link38: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link108: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link10: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link143: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link81: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link64: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link30: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link196: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link6: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link26: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link132: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link70: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link175: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link105: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link97: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link12: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link126: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link33: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link179: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link25: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link80: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link17: already migrated via another 
hard link /mnt/lustre/d56wa.sanity/dir1/link22: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link44: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link178: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link189: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link69: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link188: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/file1: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link14: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link100: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link85: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link130: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link141: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link159: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link18: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link173: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link117: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link165: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link11: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link111: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link101: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link168: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link131: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link74: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link59: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link41: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link160: already 
migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link50: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link198: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link125: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link103: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link98: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link172: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link155: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link144: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link62: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link112: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link68: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link76: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link184: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link110: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link87: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link9: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link95: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link28: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link3: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link21: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link192: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link55: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link48: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link153: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link88: already migrated via another hard link 
/mnt/lustre/d56wa.sanity/dir1/link121: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link199: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link40: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link63: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link120: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link61: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link145: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link19: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link31: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link190: already migrated via another hard link /mnt/lustre/d56wa.sanity/dir1/link16: already migrated via another hard link /home/green/git/lustre-release/lustre/utils/lfs find -stripe_count 2 -type f /mnt/lustre/d56wa.sanity | /home/green/git/lustre-release/lustre/scripts/lfs_migrate -y -c 1 /mnt/lustre/d56wa.sanity/dir2/file3: done /mnt/lustre/d56wa.sanity/dir2/file2: done /mnt/lustre/d56wa.sanity/dir2/file1: done /mnt/lustre/d56wa.sanity/file3: done /mnt/lustre/d56wa.sanity/file2: done /mnt/lustre/d56wa.sanity/dir3/file3: done /mnt/lustre/d56wa.sanity/dir3/file2: done /mnt/lustre/d56wa.sanity/dir3/file1: done PASS 56wa (18s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56wb: check lfs_migrate pool support ====== 21:17:12 (1713489432) Creating test dir...done. Creating test file...done. Detecting existing pools...none detected. Creating pool 'testpool'...done. Adding target to pool...done. Setting pool using -p option...done. Verifying test file is in pool after migrating...done. Removing test file from pool 'testpool'...done. Setting pool using --pool option...done. 
Destroy the created pools: testpool lustre.testpool PASS 56wb (10s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56wc: check unrecognized options for lfs_migrate are passed through ========================================================== 21:17:24 (1713489444) Creating test dir...done Setting initial stripe for test file...done. 12+0 records in 12+0 records out 12582912 bytes (13 MB) copied, 0.363704 s, 34.6 MB/s Verifying incompatible options are detected...lfs_migrate error: option -R or -A cannot be used with -c, -S, or -p lfs_migrate error: option -R or -A cannot be used with -c, -S, or -p lfs_migrate error: option -R or -A cannot be used with -c, -S, or -p lfs_migrate error: option -R or -A cannot be used with -E eof -c 1 lfs_migrate error: option -R cannot be used with -A lfs_migrate error: option -R or -A cannot be used with -c, -S, or -p lfs_migrate error: option -R or -A cannot be used with -c, -S, or -p lfs_migrate error: option -R or -A cannot be used with -c, -S, or -p lfs_migrate error: option -R or -A cannot be used with -E eof -c 1 done. Verifying -S option is passed through to lfs migrate.../mnt/lustre/d56wc.sanity/f56wc.sanity: done done. Verifying long options supported.../mnt/lustre/d56wc.sanity/f56wc.sanity: done /mnt/lustre/d56wc.sanity/f56wc.sanity: done done. Verifying explicit stripe count can be set.../mnt/lustre/d56wc.sanity/f56wc.sanity: done done. Setting stripe for parent directory...done. Verifying restripe option uses parent stripe settings.../mnt/lustre/d56wc.sanity/f56wc.sanity: done done. Verifying striping size preserved when not specified.../mnt/lustre/d56wc.sanity/f56wc.sanity: done done. Verifying file name properly detected.../mnt/lustre/d56wc.sanity/f56wc.sanity: done done. Verifying PFL options passed through.../mnt/lustre/d56wc.sanity/f56wc.sanity: done done. 
PASS 56wc (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56wd: check lfs_migrate --rsync and --no-rsync work ========================================================== 21:17:33 (1713489453) Creating test dir...striped dir -i0 -c2 -H crush2 /mnt/lustre/d56wd.sanity done. Creating test file...done. Make sure --no-rsync option works...done. Make sure --rsync option works...done. PASS 56wd (2s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56we: check lfs_migrate --non-direct|-D support ========================================================== 21:17:37 (1713489457) striped dir -i0 -c2 -H all_char /mnt/lustre/d56we.sanity Make sure --non-direct|-D works...done. PASS 56we (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56x: lfs migration support ================ 21:17:42 (1713489462) striped dir -i0 -c2 -H crush /mnt/lustre/d56x.sanity PASS 56x (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56xa: lfs migration --block support ======= 21:17:47 (1713489467) striped dir -i0 -c2 -H crush2 /mnt/lustre/d56xa.sanity/56xa PASS 56xa (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56xb: lfs migration hard link support ===== 21:17:52 (1713489472) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56xb.sanity testing lfs migrate mode when all links fit within xattrs creating initial file...done creating symlinks...done creating nonlinked files...done creating hard links 2:100...done checking number of hard links listed in xattrs...100 migrating files...done verifying files...done testing rsync mode when all links fit within xattrs checking number of hard links listed in xattrs...100 migrating files...done verifying files...done testing lfs migrate mode when all links do not fit within xattrs creating hard links 
101:200...done checking number of hard links listed in xattrs...167 migrating files...done verifying files...done testing rsync mode when all links do not fit within xattrs checking number of hard links listed in xattrs...167 migrating files...done verifying files...done testing non-root lfs migrate mode when not all links are in xattr checking number of hard links listed in xattrs...167 migrating files...running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/scripts/lfs_migrate] [-S] [1m] [/mnt/lustre/d56xb.sanity] done verifying files...done PASS 56xb (88s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56xc: lfs migration autostripe ============ 21:19:23 (1713489563) striped dir -i0 -c2 -H crush2 /mnt/lustre/d56xc.sanity Setting initial stripe for 20MB test file...done Sizing 20MB test file...done Verifying small file autostripe count is 1.../mnt/lustre/d56xc.sanity/20mb: done done Setting stripe for 1GB test file...done Sizing 1GB test file...done Migrating 1GB file.../mnt/lustre/d56xc.sanity/1gb: done done Verifying autostripe count is sqrt(n) + 1...done PASS 56xc (3s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56xd: check lfs_migrate --yaml and --copy support ========================================================== 21:19:28 (1713489568) striped dir -i0 -c2 -H all_char /mnt/lustre/d56xd.sanity 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.127776 s, 32.8 MB/s /mnt/lustre/d56xd.sanity/f56xd.sanity.mgrt: done /mnt/lustre/d56xd.sanity/f56xd.sanity.mgrt: done PASS 56xd (6s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56xe: migrate a composite layout file ===== 21:19:36 (1713489576) striped dir -i0 -c2 -H crush2 /mnt/lustre/d56xe.sanity 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.190119 s, 22.1 MB/s 
/mnt/lustre/d56xe.sanity/f56xe.sanity: done PASS 56xe (7s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56xf: FID is not lost during migration of a composite layout file ========================================================== 21:19:45 (1713489585) striped dir -i0 -c2 -H all_char /mnt/lustre/d56xf.sanity 4+0 records in 4+0 records out 4194304 bytes (4.2 MB) copied, 0.191931 s, 21.9 MB/s PASS 56xf (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56xg: lfs migrate pool support ============ 21:19:52 (1713489592) Creating new pool oleg329-server: Pool lustre.test_56xg_0 created Adding targets to pool oleg329-server: OST lustre-OST0000_UUID added to pool lustre.test_56xg_0 Creating new pool oleg329-server: Pool lustre.test_56xg_1 created Adding targets to pool oleg329-server: OST lustre-OST0001_UUID added to pool lustre.test_56xg_1 Creating new pool oleg329-server: Pool lustre.test_56xg_2 created Adding targets to pool oleg329-server: OST lustre-OST0000_UUID added to pool lustre.test_56xg_2 oleg329-server: OST lustre-OST0001_UUID added to pool lustre.test_56xg_2 1. migrate f56xg.sanity on pool test_56xg_0 2. migrate f56xg.sanity on pool test_56xg_2 3. migrate f56xg.sanity on pool test_56xg_1 4. 
migrate f56xg.sanity on pool test_56xg_2 with default stripe parameters Destroy the created pools: test_56xg_0,test_56xg_1,test_56xg_2 lustre.test_56xg_0 oleg329-server: OST lustre-OST0000_UUID removed from pool lustre.test_56xg_0 oleg329-server: Pool lustre.test_56xg_0 destroyed lustre.test_56xg_1 oleg329-server: OST lustre-OST0001_UUID removed from pool lustre.test_56xg_1 oleg329-server: Pool lustre.test_56xg_1 destroyed lustre.test_56xg_2 oleg329-server: OST lustre-OST0000_UUID removed from pool lustre.test_56xg_2 oleg329-server: OST lustre-OST0001_UUID removed from pool lustre.test_56xg_2 oleg329-server: Pool lustre.test_56xg_2 destroyed PASS 56xg (40s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56xh: lfs migrate bandwidth limitation support ========================================================== 21:20:34 (1713489634) 25+0 records in 25+0 records out 26214400 bytes (26 MB) copied, 0.22808 s, 115 MB/s 25M -rw-r--r-- 1 root root 25M Apr 18 21:20 /tmp/f56xh.sanity.tmp /mnt/lustre/f56xh.sanity: - { seconds: 4, rmbps: 1, wmbps: 1, copied: 4, size: 25, pct: 16% } - { seconds: 8, rmbps: 1, wmbps: 1, copied: 8, size: 25, pct: 32% } - { seconds: 12, rmbps: 1, wmbps: 1, copied: 12, size: 25, pct: 48% } - { seconds: 16, rmbps: 1, wmbps: 1, copied: 16, size: 25, pct: 64% } - { seconds: 20, rmbps: 1, wmbps: 1, copied: 20, size: 25, pct: 80% } - { seconds: 24, rmbps: 1, wmbps: 1, copied: 24, size: 25, pct: 96% } - { seconds: 25, rmbps: 1, wmbps: 1, copied: 25, size: 25, pct: 100% } PASS 56xh (29s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56xi: lfs migrate stats support =========== 21:21:06 (1713489666) 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.0691538 s, 75.8 MB/s 5.0M -rw-r--r-- 1 root root 5.0M Apr 18 21:21 /tmp/f56xi.sanity.tmp /mnt/lustre/f56xi.sanity.1: - { seconds: 0, rmbps: 26, wmbps: 26, copied: 5, size: 5, pct: 100% } 
/mnt/lustre/f56xi.sanity.2: - { seconds: 0, rmbps: 17, wmbps: 17, copied: 5, size: 5, pct: 100% } /mnt/lustre/f56xi.sanity.3: - { seconds: 0, rmbps: 20, wmbps: 20, copied: 5, size: 5, pct: 100% } PASS 56xi (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56xj: lfs migrate -b should not cause starvation of threads on OSS ========================================================== 21:21:13 (1713489673) striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56xj.sanity 10+0 records in 10+0 records out 10485760 bytes (10 MB) copied, 0.389047 s, 27.0 MB/s create 174 hard links of /mnt/lustre/f56xj.sanity total: 174 link in 0.92 seconds: 189.18 ops/second lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link9: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link1: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link106: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link14: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link16: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link2: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link35: cannot get group lock: Resource temporarily unavailable lfs migrate: 
cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link5: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link47: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link6: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link0: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link7: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link67: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link74: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link8: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link10: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link78: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link11: cannot get group lock: Resource temporarily unavailable lfs 
migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link92: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link94: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link13: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link99: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link15: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link118: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link121: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link123: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link132: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link18: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link141: cannot get group lock: Resource
temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link142: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link19: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link20: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link21: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link155: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link23: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link25: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link3: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link26: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link24: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link27: cannot get group 
lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link29: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link32: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link30: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link31: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link28: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link36: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link38: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link39: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link40: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link42: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link41: 
cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link45: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link43: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link44: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link49: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link51: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link50: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link52: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link53: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link48: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link55: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: 
Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link57: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link56: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link58: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link59: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link62: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link63: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link60: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link64: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link65: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link69: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link66: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: 
/mnt/lustre/d56xj.sanity/link70: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link72: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link76: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link68: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link75: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link77: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link79: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link81: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link73: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link83: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link82: cannot get group lock: Resource temporarily unavailable error:
lfs migrate: /mnt/lustre/d56xj.sanity/link71: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link85: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link88: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link86: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link84: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link87: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link91: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link80: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link90: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link93: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link89: cannot get group lock: Resource temporarily 
unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link95: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link97: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link96: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link100: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link107: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link12: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link103: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link104: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link108: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link101: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) 
error: lfs migrate: /mnt/lustre/d56xj.sanity/link102: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link109: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link110: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link114: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link105: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link113: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link117: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link115: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link116: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link120: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link119: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link122: cannot get group lock: Resource temporarily unavailable lfs migrate: 
cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link127: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link128: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link130: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link124: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link129: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link131: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link133: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link135: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link134: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link136: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link137: cannot get group lock: Resource temporarily 
unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link138: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link139: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link145: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link144: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link143: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link146: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link147: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link150: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link140: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link148: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link149: cannot get group 
lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link151: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link152: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link153: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link154: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link156: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link157: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link158: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link159: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link162: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link160: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: 
/mnt/lustre/d56xj.sanity/link161: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link163: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link164: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link165: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link167: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link166: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link170: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link168: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link171: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link169: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link173: cannot get group lock: Resource temporarily 
unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link172: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link33: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link34: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link54: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link46: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link37: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link112: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link61: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link111: cannot get group lock: Resource temporarily unavailable lfs migrate: cannot get group lock: Resource temporarily unavailable (11) lfs migrate: cannot get group lock: Resource temporarily unavailable (11) error: lfs migrate: /mnt/lustre/d56xj.sanity/link126: cannot get group lock: Resource temporarily unavailable error: lfs migrate: /mnt/lustre/d56xj.sanity/link125: cannot get group lock: Resource temporarily unavailable PASS 56xj (11s) debug_raw_pointers=0 
debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56xk: lfs mirror resync bandwidth limitation support ========================================================== 21:21:26 (1713489686) 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.153621 s, 34.1 MB/s 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.0169943 s, 241 kB/s lcme_flags: init,stale - { seconds: 1, rmbps: 1, wmbps: 1, copied: 1, size: 5, pct: 20% } - { seconds: 2, rmbps: 1, wmbps: 1, copied: 2, size: 5, pct: 40% } - { seconds: 3, rmbps: 1, wmbps: 1, copied: 3, size: 5, pct: 60% } - { seconds: 4, rmbps: 1, wmbps: 1, copied: 4, size: 5, pct: 80% } - { seconds: 5, rmbps: 1, wmbps: 1, copied: 5, size: 5, pct: 100% } - { seconds: 5, rmbps: 1, wmbps: 1, copied: 5, size: 5, pct: 100% } PASS 56xk (8s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56xl: lfs mirror resync stats support ===== 21:21:36 (1713489696) 5+0 records in 5+0 records out 5242880 bytes (5.2 MB) copied, 0.149067 s, 35.2 MB/s 1+0 records in 1+0 records out 4096 bytes (4.1 kB) copied, 0.0148312 s, 276 kB/s lcme_flags: init,stale /mnt/lustre/f56xl.sanity.1 lcm_layout_gen: 2 lcm_mirror_count: 2 lcm_entry_count: 2 lcme_id: 65537 lcme_mirror_id: 1 lcme_flags: init,stale lcme_extent.e_start: 0 lcme_extent.e_end: EOF lmm_stripe_count: 1 lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 1 lmm_objects: - 0: { l_ost_idx: 1, l_fid: [0x2c0000403:0x4fea:0x0] } lcme_id: 131073 lcme_mirror_id: 2 lcme_flags: init lcme_extent.e_start: 0 lcme_extent.e_end: EOF lmm_stripe_count: 1 lmm_stripe_size: 4194304 lmm_pattern: raid0 lmm_layout_gen: 0 lmm_stripe_offset: 0 lmm_objects: - 0: { l_ost_idx: 0, l_fid: [0x280000bd1:0x4fe8:0x0] } - { seconds: 0, rmbps: 24, wmbps: 24, copied: 5, size: 5, pct: 100% } PASS 56xl (4s) debug_raw_pointers=0 debug_raw_pointers=0 debug_raw_pointers=Y debug_raw_pointers=Y == sanity test 56y: lfs find -L 
raid0|released =========== 21:21:42 (1713489702)
striped dir -i0 -c2 -H crush /mnt/lustre/d56y.sanity
PASS 56y (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56z: lfs find should continue after an error ========================================================== 21:21:46 (1713489706)
striped dir -i0 -c2 -H all_char /mnt/lustre/d56z.sanity
striped dir -i0 -c2 -H all_char /mnt/lustre/d56z.sanity/d0
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56z.sanity/d1
striped dir -i0 -c2 -H all_char /mnt/lustre/d56z.sanity/d2
striped dir -i0 -c2 -H crush /mnt/lustre/d56z.sanity/d3
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56z.sanity/d4
striped dir -i0 -c2 -H crush2 /mnt/lustre/d56z.sanity/d5
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56z.sanity/d6
striped dir -i0 -c2 -H all_char /mnt/lustre/d56z.sanity/d7
striped dir -i0 -c2 -H crush2 /mnt/lustre/d56z.sanity/d8
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56z.sanity/d9
lfs: failed for '/mnt/lustre/non_existent_dir': No such file or directory
/mnt/lustre/d56z.sanity
/mnt/lustre/d56z.sanity/d7
/mnt/lustre/d56z.sanity/d7/f56z.sanity
/mnt/lustre/d56z.sanity/d2
/mnt/lustre/d56z.sanity/d2/f56z.sanity
/mnt/lustre/d56z.sanity/d1
/mnt/lustre/d56z.sanity/d1/f56z.sanity
/mnt/lustre/d56z.sanity/d8
/mnt/lustre/d56z.sanity/d8/f56z.sanity
/mnt/lustre/d56z.sanity/d4
/mnt/lustre/d56z.sanity/d4/f56z.sanity
/mnt/lustre/d56z.sanity/d6
/mnt/lustre/d56z.sanity/d6/f56z.sanity
/mnt/lustre/d56z.sanity/d9
/mnt/lustre/d56z.sanity/d9/f56z.sanity
/mnt/lustre/d56z.sanity/d5
/mnt/lustre/d56z.sanity/d5/f56z.sanity
/mnt/lustre/d56z.sanity/d3
/mnt/lustre/d56z.sanity/d3/f56z.sanity
/mnt/lustre/d56z.sanity/d0
/mnt/lustre/d56z.sanity/d0/f56z.sanity
running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [find] [/mnt/lustre/non_existent] [/mnt/lustre/d56z.sanity]
lfs: failed for '/mnt/lustre/non_existent': No such file or directory
/mnt/lustre/d56z.sanity
/mnt/lustre/d56z.sanity/d7
/mnt/lustre/d56z.sanity/d7/f56z.sanity
/mnt/lustre/d56z.sanity/d2
/mnt/lustre/d56z.sanity/d2/f56z.sanity
/mnt/lustre/d56z.sanity/d1
/mnt/lustre/d56z.sanity/d1/f56z.sanity
/mnt/lustre/d56z.sanity/d8
/mnt/lustre/d56z.sanity/d8/f56z.sanity
lfs find: llapi_semantic_traverse: Failed to open '/mnt/lustre/d56z.sanity/d4': Permission denied (13)
/mnt/lustre/d56z.sanity/d6
/mnt/lustre/d56z.sanity/d6/f56z.sanity
/mnt/lustre/d56z.sanity/d9
/mnt/lustre/d56z.sanity/d9/f56z.sanity
/mnt/lustre/d56z.sanity/d5
/mnt/lustre/d56z.sanity/d5/f56z.sanity
/mnt/lustre/d56z.sanity/d3
/mnt/lustre/d56z.sanity/d3/f56z.sanity
/mnt/lustre/d56z.sanity/d0
/mnt/lustre/d56z.sanity/d0/f56z.sanity
lfs: failed for '/mnt/lustre/d56z.sanity': Permission denied
running as uid/gid/euid/egid 500/500/500/500, groups: [/home/green/git/lustre-release/lustre/utils/lfs] [find] [/mnt/lustre/non_existent] [/mnt/lustre/d56z.sanity]
lfs: failed for '/mnt/lustre/non_existent': No such file or directory
lfs find: llapi_semantic_traverse: Failed to open '/mnt/lustre/d56z.sanity/d4': Permission denied (13)
lfs: failed for '/mnt/lustre/d56z.sanity': Permission denied
PASS 56z (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56aa: lfs find --size under striped dir === 21:21:51 (1713489711)
total: 1024 open/close in 4.20 seconds: 243.89 ops/second
PASS 56aa (16s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56ab: lfs find --blocks =================== 21:22:09 (1713489729)
striped dir -i0 -c2 -H crush /mnt/lustre/d56ab.sanity
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.00644673 s, 1.3 MB/s
1+0 records in
1+0 records out
4096 bytes (4.1 kB) copied, 0.00563468 s, 727 kB/s
2+0 records in
2+0 records out
2097152 bytes (2.1 MB) copied, 0.0969678 s, 21.6 MB/s
total 2060
   8 -rw-r--r-- 1 root root 16785408 Apr 18 21:22 f56ab.sanity.1
   4 -rw-r--r-- 1 root root 16781312 Apr 18 21:22 f56ab.sanity.2
2048 -rw-r--r-- 1 root root 18874368 Apr 18 21:22 f56ab.sanity.3
PASS 56ab (4s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56aca: check lfs find -perm with octal representation ========================================================== 21:22:16 (1713489736)
striped dir -i0 -c2 -H all_char /mnt/lustre/d56aca.sanity
PASS 56aca (12s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56acb: check lfs find -perm with symbolic representation ========================================================== 21:22:31 (1713489751)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56acb.sanity
PASS 56acb (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56acc: check parsing error for lfs find -perm ========================================================== 21:22:36 (1713489756)
striped dir -i0 -c2 -H crush2 /mnt/lustre/d56acc.sanity
PASS 56acc (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56ba: test lfs find --component-end, -start, -count, and -flags ========================================================== 21:22:41 (1713489761)
striped dir -i0 -c2 -H all_char /mnt/lustre/d56ba.sanity/1Mfiles
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56ba.sanity/1Mfiles/dir1
striped dir -i0 -c2 -H crush2 /mnt/lustre/d56ba.sanity/2Mfiles
striped dir -i0 -c2 -H all_char /mnt/lustre/d56ba.sanity/2Mfiles/dir1
striped dir -i0 -c2 -H crush2 /mnt/lustre/d56ba.sanity/2Mfiles/dir2
PASS 56ba (5s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56ca: check lfs find --mirror-count|-N and --mirror-state ========================================================== 21:22:48 (1713489768)
total: 10 open/close in 0.10 seconds: 95.79 ops/second
total: 10 open/close in 0.11 seconds: 90.35 ops/second
total: 10 open/close in 0.10 seconds: 98.60 ops/second
PASS 56ca (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56da: test lfs find with long paths ======= 21:22:53 (1713489773)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56da.sanity
striped dir -i0 -c2 -H all_char aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
striped dir -i0 -c2 -H crush2 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
striped dir -i0 -c2 -H all_char aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
striped dir -i0 -c2 -H crush aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
striped dir -i0 -c2 -H crush aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
striped dir -i0 -c2 -H all_char aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
striped dir -i0 -c2 -H crush aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
striped dir -i0 -c2 -H all_char aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
striped dir -i0 -c2 -H fnv_1a_64 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
striped dir -i0 -c2 -H crush aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
striped dir -i0 -c2 -H fnv_1a_64 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
striped dir -i0 -c2 -H crush2 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
striped dir -i0 -c2 -H crush aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
striped dir -i0 -c2 -H crush2 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
striped dir -i0 -c2 -H crush aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
striped dir -i0 -c2 -H crush aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
PASS 56da (7s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56ea: test lfs find -printf option ======== 21:23:02 (1713489782)
Creating new pool
oleg329-server: Pool lustre.test_56ea created
Adding targets to pool
oleg329-server: OST lustre-OST0000_UUID added to pool lustre.test_56ea
oleg329-server: OST lustre-OST0001_UUID added to pool lustre.test_56ea
lfs find: warning: unrecognized escape: '\Q'
lfs find: warning: unrecognized format directive: '%Q'
Destroy the created pools: test_56ea
lustre.test_56ea
oleg329-server: OST lustre-OST0000_UUID removed from pool lustre.test_56ea
oleg329-server: OST lustre-OST0001_UUID removed from pool lustre.test_56ea
oleg329-server: Pool lustre.test_56ea destroyed
PASS 56ea (17s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56eb: check lfs getstripe on symlink ====== 21:23:21 (1713489801)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56eb.sanity/subdir_1
/mnt/lustre/d56eb.sanity/link_1
stripe_count: 1 stripe_size: 4194304 pattern: 0 stripe_offset: -1
/mnt/lustre/d56eb.sanity/link_1 has no stripe info
/mnt/lustre/d56eb.sanity/file_link_2
lmm_stripe_count: 1
/mnt/lustre/d56eb.sanity/file_link_2 has no stripe info
PASS 56eb (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56ec: check lfs getstripe,setstripe --hex --yaml ========================================================== 21:23:25 (1713489805)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56ec.sanity
PASS 56ec (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56eda: check lfs find --links ============= 21:23:30 (1713489810)
striped dir -i0 -c2 -H crush /mnt/lustre/d56eda.sanity
PASS 56eda (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56edb: check lfs find --links for directory striped on multiple MDTs ========================================================== 21:23:35 (1713489815)
striped dir -i0 -c2 -H crush /mnt/lustre/d56edb.sanity
lmv_stripe_count: 2 lmv_stripe_offset: 1 lmv_hash_type: crush
mdtidx FID[seq:oid:ver]
     1 [0x2400032e0:0x52:0x0]
     0 [0x200002342:0x51:0x0]
PASS 56edb (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56ef: lfs find with multiple paths ======== 21:23:39 (1713489819)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d56ef.sanity
PASS 56ef (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 56eg: lfs find -xattr ===================== 21:23:44 (1713489824)
striped dir -i0 -c2 -H all_char /mnt/lustre/d56eg.sanity
PASS 56eg (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 57a: verify MDS filesystem created with large inodes ============================================================ 21:23:49 (1713489829)
oleg329-server: dumpe2fs 1.46.2.wc5 (26-Mar-2022)
oleg329-server: dumpe2fs 1.46.2.wc5 (26-Mar-2022)
PASS 57a (3s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 57b: default LOV EAs are stored inside large inodes ============================================================= 21:23:54 (1713489834)
striped dir -i1 -c1 -H fnv_1a_64 /mnt/lustre/d57b.sanity
mcreating 100 files
total: 100 create in 0.46 seconds: 216.65 ops/second
Filesystem                  1K-blocks  Used Available Use% Mounted on
192.168.203.129@tcp:/lustre   7666232 50616   7163200   1% /mnt/lustre
opening files to create objects/EAs
Filesystem                  1K-blocks  Used Available Use% Mounted on
192.168.203.129@tcp:/lustre   7666232 50616   7163200   1% /mnt/lustre
PASS 57b (8s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 58: verify cross-platform wire constants ======================================================================== 21:24:04 (1713489844)
wire constants OK
PASS 58 (2s)
debug_raw_pointers=0
debug_raw_pointers=0
debug_raw_pointers=Y
debug_raw_pointers=Y
== sanity test 59: verify cancellation of llog records async =================================================================== 21:24:08 (1713489848)
touch 130 files
total: 130 open/close in 0.80 seconds: 161.65 ops/second
rm 130 files
 - unlinked 0 (time 1713489852 ; total 0 ; last 0)
total: 130 unlinks in 0 seconds: inf unlinks/second
Waiting for MDT destroys to complete
PASS 59 (14s)
debug_raw_pointers=0
debug_raw_pointers=0
resend_count is set to 4
4
resend_count is set to 4
4
resend_count is set to 4
4
resend_count is set to 4
4
resend_count is set to 4
4
== sanity test complete, duration 4073 sec =============== 21:24:25 (1713489865)
=== sanity: start cleanup 21:24:26 (1713489866) ===
=== sanity: finish cleanup 21:25:20 (1713489920) ===
debug=super ioctl neterror warning dlmtrace error emerg ha rpctrace vfstrace config console lfsck